The structure of the armed forces is based on the Total Force concept, which recognizes that all elements of the structure—active duty military personnel, reservists, defense contractors, host nation military and civilian personnel, and DOD federal civilian employees—contribute to national defense. In recent years, federal civilian personnel have deployed along with military personnel to participate in Operations Joint Endeavor, conducted in the countries of Bosnia-Herzegovina, Croatia, and Hungary; Joint Guardian, in Kosovo; and Desert Storm, in Southwest Asia. Further, since the beginning of the Global War on Terrorism, the role of DOD’s federal civilian personnel has expanded to include participation in combat support functions in Operations Enduring Freedom and Iraqi Freedom. DOD relies on the federal civilian personnel it deploys to support a range of essential missions, including intelligence collection, criminal investigations, and weapon systems acquisition and maintenance. To ensure that its federal civilian employees will deploy to combat zones and perform critical combat support functions in theater, DOD established the emergency-essential program in 1985. Under this program, DOD designates as “emergency-essential” those civilian employees whose positions are required to ensure the success of combat operations or the availability of combat-essential systems. DOD can deploy federal civilian employees either on a voluntary or involuntary basis to accomplish the DOD mission. DOD has established force health protection and surveillance policies aimed at assessing and reducing or preventing health risks for its deployed federal civilian personnel; however, the department lacks procedures to ensure the components’ full implementation of its policies. In reviewing DOD federal civilian deployment records and other electronic documentation at selected component locations, we found that these components lacked documentation to show that they had fully complied with DOD’s force health protection and surveillance policy requirements for some federal civilian personnel who deployed to Afghanistan and Iraq. As a larger issue, DOD’s policies did not require the centralized collection of data on the identity of its deployed civilians, their movements in theater, or their health status, further hindering its efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. In August 2006, DOD issued a revised policy (to be effective in December 2006) that outlines procedures to address its lack of centralized deployment and health-related data. However, the procedures are not comprehensive enough to ensure that DOD will be sufficiently informed of the extent to which its components fully comply with its requirements to monitor the health of deployed federal civilians. The DOD components included in our review lacked documentation to show that they always implemented force health protection and surveillance requirements for deployed federal civilians. 
These requirements include completing (1) pre-deployment health assessments to ensure that only medically fit personnel deploy outside of the United States as part of a contingency or combat operation; (2) pre-deployment immunizations to address possible health threats in deployment locations; (3) pre-deployment medical screenings for tuberculosis and human immunodeficiency virus (HIV); and (4) post-deployment health assessments to document current health status, experiences, environmental exposures, and health concerns related to their work while deployed. DOD’s force health protection and surveillance policies require the components to assess the medical condition of federal civilians to ensure that only medically fit personnel deploy outside of the United States as part of a contingency or combat operation. The policies stipulate that all deploying civilian personnel are to complete pre-deployment health assessment forms within 30 days of their deployments, and health care providers are to review the assessments to confirm the civilians’ health readiness status and identify any needs for additional clinical evaluations prior to their deployments. While the components that we included in our review had procedures in place that would enable them to implement DOD’s pre-deployment health assessment policies, it was not clear to what extent they had done so. Our review of deployment records and other documentation at the selected component locations found that these components lacked documentation to show that some federal civilian personnel who deployed to Afghanistan and Iraq had received the required pre-deployment health assessments. For those deployed federal civilians in our review, we found that, overall, a small number of deployment records (52 out of 3,771) were missing documentation to show that they had received their pre-deployment health assessments, as reflected in table 1. As shown in table 1, the federal civilian deployment records we included in our review showed wide variation by location regarding documentation of pre-deployment health assessments, ranging from less than 1 percent to more than 90 percent. On an aggregate component-level basis, we found that documentation was missing for 19 of the 52 records at the Navy location in our review. At the Air Force locations, documentation was missing for 29 of the 37 records in our review. In contrast, all three Army locations had hard copy or electronic records indicating that almost all of their deployed federal civilians had received pre-deployment health assessments. In addition to completing pre-deployment health assessment forms, DOD’s force health protection and surveillance policies stipulate that all DOD deploying federal civilians receive theater-specific immunizations to address possible health threats in deployment locations. Immunizations required for all civilian personnel who deploy to Afghanistan and Iraq include hepatitis A (two-shot series); tetanus-diphtheria (within 10 years of deployment); smallpox (within 5 years of deployment); typhoid; and influenza (within 12 months of deployment). As reflected in table 2, based on the deployment records maintained by the components at locations included in our review, the overall number of federal civilian deployment records lacking documentation of only one of the required immunizations for deployment to Afghanistan and Iraq was 285 out of 3,771.
However, 3,313 of the records we reviewed were missing documentation of two or more immunizations. At the Army’s Fort Bliss, our review of its electronic deployment data determined that none of its deployed federal civilians had documentation to show that they had received immunizations. Officials at this location stated that they believed some immunizations had been given; however, they could not provide documentation as evidence of this. DOD policies require deploying federal civilians to receive certain screenings, such as for tuberculosis and HIV. Table 3 indicates that 55 of the 3,771 federal civilian deployment records included in our review were lacking documentation of the required tuberculosis screening, and approximately 35 were lacking documentation of HIV screenings prior to deployment. DOD’s force health protection and surveillance policies also require returning DOD federal civilian personnel to undergo post-deployment health assessments to document current health status, experiences, environmental exposures, and health concerns related to their work while deployed. The post-deployment process begins within 5 days of civilians’ redeployment from the theater to their home or demobilization processing stations. DOD’s policies require civilian personnel to complete a post-deployment assessment that includes questions on health and exposure concerns. A health care provider is to review each assessment and recommend additional clinical evaluation or treatment as needed. As reflected in table 4, our review of deployment records at the selected component locations found that these components lacked documentation to show that most deployed federal civilians (3,525 out of 3,771) who deployed to Afghanistan and Iraq had received the required post-deployment health assessments upon their return to the United States. Federal civilian deployment records lacking evidence of post-deployment health assessments ranged from 3 each at the U.S. Army Corps of Engineers Transatlantic Programs Center and Wright-Patterson Air Force Base to 2,977 at Fort Bliss. Beyond the aforementioned weaknesses found in the selected components’ implementation of force health protection and surveillance requirements for deploying federal civilians, as a larger issue, DOD lacks comprehensive, centralized data that would enable it to readily identify its deployed civilians, track their movements in theater, or monitor their health status, further hindering efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. The Defense Manpower Data Center (DMDC) is responsible for maintaining the department’s centralized system that currently collects location-specific deployment information for military servicemembers, such as grid coordinates, latitude/longitude coordinates, or geographic location codes. However, DOD has not taken steps to similarly maintain centralized data on its deployed federal civilians. In addition, DOD had not provided guidance that would require its components to track and report data on the locations and movements of DOD federal civilian personnel in theaters of operations.
In the absence of such a requirement, each DOD component collected and reported aggregated data that identified the total number of DOD federal civilian personnel in a theater of operations, but each lacked the ability to gather, analyze, and report information that could be used to specifically identify individuals at risk for occupational and environmental exposures during deployments. In previously reporting on the military services’ implementation of DOD’s force health protection and surveillance policies in 2003, we highlighted the importance of knowing the identity of servicemembers who deployed during a given operation and of tracking their movements within the theater of operations as major elements of a military medical surveillance system. We further noted the Institute of Medicine’s finding that documentation on the location of units and individuals during a given deployment is important for epidemiological studies and appropriate medical care during and after deployments. For example, this information allows epidemiologists to study the incidence of disease patterns across populations of deployed servicemembers who may have been exposed to diseases and hazards within the theater, and health care professionals to treat their medical problems appropriately. Without location-specific information for all of its deployed federal civilians and centralized data in its department-level system, DOD limits its ability to ensure that sufficient and appropriate consideration will also be given to addressing the health care concerns of these individuals. DOD also had not provided guidance to the components that would require them to forward completed deployment health assessments for all federal civilians to the Army Medical Surveillance Activity (AMSA), where these assessments are supposed to be archived in the Defense Medical Surveillance System (DMSS), integrated with other historical and current data on personnel and deployments, and used to monitor the health of personnel who participate in deployments. The overall success of deployment force protection and surveillance efforts, in large measure, depends on the completeness of health assessment data. The lack of such data may hamper DOD’s ability to intervene in a timely manner to address health care problems that may arise from DOD federal civilian deployments to overseas locations in support of contingency operations. With increases in the department’s use of federal civilian personnel to support military operations, DOD officials have recognized the need for more complete and centralized location-specific deployment information and deployment-related health information on its deployed federal civilians. In this regard, in August 2006, the Office of the Under Secretary of Defense for Personnel and Readiness issued revised policy and program guidance that generally addressed the shortcomings in DOD’s force health protection and surveillance capabilities. The revised policy and guidance, scheduled to become effective in December 2006, require the components, within 3 years, to report electronically (at least weekly) to DMDC location-specific data for all deployed personnel, including federal civilians. In addition, the policy and guidance require the components to submit all completed health assessment forms to the AMSA for inclusion in DMSS.
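For illustration only, the sketch below shows the kind of location-specific record that the revised policy would have each component report, at least weekly, to DMDC for every deployed individual, including federal civilians. The field names, types, and structure are our assumptions for illustrative purposes; they are not DMDC's actual schema.

```python
# Hypothetical illustration of a weekly location report for one deployed individual.
# Field names and types are illustrative assumptions, not DMDC's actual schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class DeploymentLocationReport:
    person_id: str                       # identifier for the deployed individual
    personnel_category: str              # e.g., "military" or "DOD federal civilian"
    component: str                       # reporting component (e.g., Army, Navy, Air Force, DCMA)
    operation: str                       # supported operation (e.g., "Operation Iraqi Freedom")
    report_date: date                    # date of the weekly report
    geographic_location_code: Optional[str] = None   # standard geographic location code, if used
    grid_coordinates: Optional[str] = None            # military grid reference, if used
    latitude: Optional[float] = None                  # latitude/longitude coordinates, if used
    longitude: Optional[float] = None


# Example record for a federal civilian reported by the Army for a given week.
example = DeploymentLocationReport(
    person_id="C-0001",
    personnel_category="DOD federal civilian",
    component="Army",
    operation="Operation Iraqi Freedom",
    report_date=date(2006, 12, 4),
    geographic_location_code="IZ",
)
```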
Nonetheless, DOD’s new policy is not comprehensive enough to ensure that the department will be sufficiently informed of the extent to which its components are complying with existing health protection requirements for its deployed federal civilians. Although the policy requires DOD components to report certain location-specific and health data for all of their deployed personnel, including federal civilians, it does not establish an oversight and quality assurance mechanism, which our prior work has identified as essential in providing care to military personnel, for assessing and ensuring the full implementation of the force health protection and surveillance requirements by all DOD components. In a September 2003 report on the Army’s and the Air Force’s compliance with force health protection policy for servicemembers, we noted that neither of the military services had fully complied with DOD’s force health protection and surveillance policies for many active duty servicemembers, including the policies requiring that servicemembers be assessed before and after deploying overseas and receive certain immunizations. We further noted that DOD, at that time, did not have an effective quality assurance program to provide oversight of, and ensure compliance with, the department’s force health protection and surveillance requirements, and that the lack of such a system was a major cause of the high rate of noncompliance that we identified at the units we visited. In response to a legislative mandate and our recommendation, DOD established an oversight mechanism to evaluate the success of its force health protection and surveillance policies in ensuring that servicemembers received pre- and post-deployment medical examinations and that record-keeping requirements were met. This oversight mechanism included (1) periodic site visits jointly conducted with staff from the Office of the Assistant Secretary for Health Affairs and staff from the military services to assess compliance with the deployment health requirements, (2) periodic reports from the services on their quality assurance programs, and (3) periodic reports from AMSA on health assessment data maintained in the centralized database. Until the department provides a similar oversight and quality assurance mechanism for its deployed federal civilians, it will not be effectively positioned to ensure compliance with its policies, or ensure the health care and protection of these individuals as they continue to support contingency operations. DOD has established medical treatment policies that cover its federal civilians while they are deployed to support contingency operations in Afghanistan and Iraq, and the available workers’ compensation claims we reviewed confirmed that those deployed federal civilians received care consistent with the policies. These policies state that DOD federal civilians who require treatment for injuries or diseases sustained during overseas hostilities may be provided care under the DOD military health system. Thus, DOD’s deployed federal civilians may receive care through the military’s treatment facilities. As shown in figure 1, DOD’s military health system provides four levels of medical care to personnel who are injured or become ill while deployed. Specifically, medical treatment during a military contingency begins with level one care, which consists of basic first aid and emergency care at a unit in the theater of operation.
The treatment then moves to a second level of care, where, at an aid station, injured or ill personnel are examined and evaluated to determine their priority for continued movement outside of the theater of operation and to the next (third) level of care. At the third level, injured or ill personnel are treated in a medical installation staffed and equipped for resuscitation, surgery, and postoperative care. Finally, at the fourth level of care, which occurs far from the theater of operation, injured or ill personnel are treated in a hospital staffed and equipped for definitive care. Injured or ill DOD federal civilians deployed in support of contingency operations in Afghanistan and Iraq who require level four medical care are transported to DOD’s Regional Medical Center in Landstuhl, Germany. Injured or ill DOD federal civilians who cannot be returned to duty in theater are evacuated to the United States for continuation of medical care. In these cases (or where previously deployed federal civilians later identify injuries or diseases and subsequently request medical treatment), DOD’s policy provides for its federal civilians who require treatment for deployment-related injuries or occupational illnesses to receive medical care through either the military’s medical treatment facilities or civilian facilities. The policy stipulates that federal civilians who are injured or become ill as a result of their deployment must file a Federal Employees’ Compensation Act (FECA) claim with DOD, which then files a claim with the Department of Labor’s Office of Workers’ Compensation Programs (OWCP). The Department of Labor’s OWCP is responsible for deciding whether to award or deny medical benefits. OWCP must establish—based on evidence provided by the DOD civilian—that the employee is eligible for workers’ compensation benefits due to the injury or disease for which the benefits are claimed. To obtain benefits under FECA, DOD federal civilians must show that (1) they were employed by the U.S. government, (2) they were injured (exposed) in the workplace, (3) they have filed a claim in a timely manner, (4) they have a disabling medical condition, and (5) there is a causal link between their medical condition and the injury or exposure. Three avenues of appeal are provided for DOD federal civilians in the event that the initial claim is denied: (1) reconsideration by an OWCP claims examiner, (2) a hearing or review of the written record by OWCP’s Branch of Hearings and Review, and (3) a review by the Employees’ Compensation Appeals Board. DOD’s medical treatment process and the OWCP’s claims process are shown in figure 2. Overall, the claims we reviewed showed that the DOD federal civilians who sustained injuries or diseases while deployed had received care that was consistent with DOD’s medical treatment policies. Specifically, in reviewing a sample of seven workers’ compensation claims (out of a universe of 83) filed under the Federal Employees’ Compensation Act by DOD federal civilians who deployed to Iraq, we found that in three cases where care was initiated in theater, the affected federal civilians had received treatment in accordance with DOD’s policies. For example, in one case, a deployed federal civilian was treated for traumatic injuries at a hospital outside of the theater of operation and could not return to duty in theater because of the severity of the injuries sustained.
The civilian was evacuated to the United States and received medical care through several of the military’s medical treatment facilities as well as through a civilian facility. Further, in all seven claims that we reviewed, DOD federal civilians who requested medical care after returning to the United States had, in accordance with DOD’s policy, received initial medical examinations and/or treatment for their deployment-related injuries, illnesses, or diseases through either military or civilian treatment facilities. While OWCP has primary responsibility for processing and approving all FECA claims for medical benefits, as noted earlier, the scope of our review did not include assessing actions taken by the Department of Labor’s OWCP in further processing workers’ compensation claims for injured or ill civilians and authorizing continuation of medical care once their claims were submitted for review. DOD provides a number of special pays and benefits to its federal civilian personnel who deploy in support of contingency operations, which are generally different in type and in amount from those provided to deployed military personnel. Both groups receive special pays, but the types and amounts differ. In our modeled scenarios, the overall amounts of compensation, which include special pays, were higher for DOD federal civilian personnel than for military personnel. DOD federal civilian personnel also receive different types and amounts of disability benefits, depending on specific program provisions and individual circumstances. Further, survivors of deceased DOD federal civilian and military personnel generally receive comparable types of cash survivor benefits—lump sum, recurring, or both—but benefit amounts differ for the two groups. Survivors of DOD federal civilian personnel, however, almost always receive fewer noncash benefits than survivors of military personnel. DOD federal civilian and military personnel are both eligible to receive special pays to compensate them for the conditions of deployment. As shown in table 5, some of the types of special pays are similar for both DOD federal civilian and military personnel, although the amounts paid to each group differ. Other special pays are unique to each group. DOD federal civilian and military personnel deployed to posts with unusually difficult or unhealthful conditions or severe physical hardships are authorized a similar type of post (hardship) differential. In addition, danger pay is granted to both groups serving at a post where civil insurrection, civil war, or war-like conditions exist. In this context, DOD federal civilian personnel who are deployed to Afghanistan and Iraq are eligible to receive post (hardship) differential and danger pay, each equivalent to 35 percent of their base salaries. In contrast, military personnel receive monthly pays of $100 for hardship duty and $225 for imminent danger. However, some special pays are unique to each group. For example, to partially reimburse those who are involuntarily separated from their dependents, military personnel are eligible to receive a family separation allowance that is not available to deployed DOD federal civilian personnel. Additionally, unlike DOD federal civilian personnel, military personnel also receive a combat zone tax exclusion while deployed to Afghanistan and Iraq that excludes certain income from federal taxes.
DOD federal civilian personnel, by contrast, are eligible for a variety of premium pays, such as overtime and night differential, that are not available to military personnel. Although DOD federal civilian and military personnel generally receive various special pays to compensate them for conditions of deployment, in certain scenarios that we modeled, the overall amounts of compensation payments were higher for DOD federal civilian personnel than for military personnel, as illustrated in tables 6 and 7. In the event of sustaining an injury while deployed, DOD federal civilian and military personnel are eligible to receive two broad categories of disability benefits—disability compensation and disability retirement. However, the benefits applicable to each group vary by type and amount, depending on specific program provisions and individual circumstances. Within these broad categories, there are three main types of disability: (1) temporary disability, (2) permanent partial disability, and (3) permanent total disability. Both DOD federal civilian and military personnel who are injured in the line of duty are eligible to receive continuation of their pay during the initial period of treatment and may be eligible to receive recurring payments for lost wages. However, the payments to DOD federal civilian personnel, which can vary significantly, are based on their salaries and on whether they have any dependents (regardless of the number), whereas disability compensation payments made by the Department of Veterans Affairs (VA) to injured military personnel are based on the severity of the injury and their number of dependents. DOD federal civilian personnel are eligible to receive continuation of pay (salary) for up to 45 days, followed by a recurring payment for wage loss that is based on a percentage of salary and whether they have any dependents, up to a cap. In contrast, military personnel receive continuation of their salary for generally no longer than a year, followed by a recurring VA disability compensation payment for wage loss that is based on the degree of disability and their number of dependents, and temporary DOD disability retirement for up to 5 years. Appendix II provides additional information on temporary disability compensation payments for federal civilian and military personnel. To illustrate the way in which the degree of impairment and an individual’s salary can affect temporary disability compensation, in our April 2006 review, we compared the disability benefits available to military personnel with those available to comparable civilian public safety officers at the federal, state, and local levels. We found that VA compensation payments for military personnel were based on a disability rating, regardless of salary level; in contrast, compensation payments for civilian public safety officers were based on salary level, regardless of disability level. Thus, for an individual with severe injuries and relatively low wages, VA compensation payments for military personnel were generally higher than those of the civilian public safety officers included in the review. However, if an individual had less severe injuries and high wages, VA compensation payments for military personnel were generally lower than those of the civilian public safety officers included in the review. When a partial disability is determined to be permanent, DOD federal civilian and military personnel can continue to receive recurring compensation payments.
For DOD federal civilian personnel, these payments are provided for the remainder of life as long as the impairment persists, and can vary significantly depending upon the salary of the individual and the existence of dependents. Military personnel are also eligible to receive recurring VA disability compensation payments for the remainder of their lives, and these payments are based on the severity of the servicemember’s injury and the number of dependents. In addition, both groups are eligible to receive additional compensation payments beyond the recurring payments just discussed, based on the type of impairment. DOD federal civilians with permanent partial disabilities receive a schedule of payments based on the specific type of impairment (sometimes referred to as a schedule award). Some impairments may result in benefits for a few weeks, while others may result in benefits for several years. Similarly, military personnel receive special monthly VA compensation payments depending on the specific type and degree of impairment. Appendix II provides more detailed information on permanent partial disability compensation payments for DOD federal civilian and military personnel. Our April 2006 review compared the compensation benefits available to military personnel with those available to federal civilian public safety officers, among others, using several scenarios. Our analysis showed that when able to return to duty, military personnel often received a greater amount of compensation benefits over a lifetime than did civilians, even when the monthly benefit payment was substantially lower and receipt of benefits was delayed for several years. Permanent partial disabilities that prevent civilian and military personnel from returning to duty in their current jobs may entitle them to receive disability retirement benefits based on a percentage of salary in addition to compensation benefits; however, the eligibility criteria and benefit amounts differ. Under the Civil Service Retirement System (CSRS), DOD federal civilian personnel must be unfit for duty and have 5 years of service to qualify for disability retirement benefits. Under the Federal Employees’ Retirement System (FERS), civilian personnel must be unfit for duty and have 18 months of service. DOD federal civilian personnel must elect either compensation benefits or disability retirement. Military personnel who are unfit for duty are eligible for DOD disability retirement benefits if they have a disability rating of 30 percent or more regardless of length of service, or if they have 20 years or more of service regardless of disability rating. The amount of the DOD disability retirement payment is offset dollar for dollar, however, by the amount of the monthly VA disability compensation payment unless they have at least 20 years of service and a disability rating of 50 percent or more, or combat-related disabilities. Our April 2006 review of disability benefits showed that when military personnel and federal civilian public safety officers were unable to return to duty due to a permanent partial disability, such as a leg amputation, the combined compensation and retirement benefits provided to the military personnel over a lifetime were sometimes more, and sometimes less, than the combined benefits provided to civilian public safety officers. 
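The retirement offset rule summarized above lends itself to a short worked illustration. The following is a minimal sketch based only on the rule as described in this report; the function name, parameters, and dollar amounts are hypothetical, and the sketch is not an official DOD or VA benefits formula.

```python
# Minimal sketch of the disability retirement offset rule described in this report.
# Function name, parameters, and dollar amounts are hypothetical illustrations,
# not an official DOD or VA benefits formula.

def monthly_dod_disability_retirement(
    dod_retirement_pay: float,
    va_compensation: float,
    years_of_service: int,
    disability_rating_pct: int,
    combat_related: bool,
) -> float:
    """Return the DOD disability retirement amount paid after any VA offset.

    Per the rule as summarized above, the DOD payment is offset dollar for dollar
    by the monthly VA disability compensation payment unless the member has at
    least 20 years of service and a disability rating of 50 percent or more, or
    the disabilities are combat-related.
    """
    exempt_from_offset = (
        years_of_service >= 20 and disability_rating_pct >= 50
    ) or combat_related
    if exempt_from_offset:
        return dod_retirement_pay
    return max(0.0, dod_retirement_pay - va_compensation)


# Hypothetical example: 12 years of service and a 40 percent rating, so the
# $1,800 DOD payment is reduced by the $900 VA payment, leaving $900 from DOD.
print(monthly_dod_disability_retirement(1800.0, 900.0, 12, 40, combat_related=False))
```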
When an injury is severe enough to be deemed permanent and total, DOD federal civilian and military personnel may receive similar types of benefits such as disability compensation and retirement payments; however, the amounts paid to each group vary. For civilian personnel, the monthly payment amounts for total disability are generally similar to those for permanent partial disability described earlier, but unlike with permanent partial disabilities, the payments do not take into account any wage earning capacity. Both groups are also eligible to receive, beyond the recurring payments, additional compensation payments similar to those for permanent partial disability. DOD federal civilians with permanent disabilities receive a schedule award based on the specific type of impairment. In addition, DOD federal civilian personnel may be eligible for an additional attendant allowance—up to $1,500 per month during 2006—if such care is needed. Military personnel receive special monthly VA compensation payments for particularly severe injuries, such as amputations, blindness, or other loss of use of organs and extremities. The payments are designed to account for attendant care or other special needs deriving from the disability. In addition to disability compensation, both DOD federal civilian and military personnel have access to disability retirement benefits for permanent total disabilities. The provisions for election and offset of disability compensation and disability retirement benefits in cases of permanent total disability are similar to provisions in cases of permanent partial disability discussed earlier. Another benefit available to DOD federal civilian and military personnel with permanent total disabilities is Social Security Disability Insurance (SSDI). SSDI benefits are available to individuals who incur a physical or mental impairment that prevents them from performing substantial gainful activity and that is expected to last at least 1 year or to result in death. The benefit is based on the employee’s earnings history and lifetime contributions to Social Security; therefore, the benefit amounts vary widely among individuals. DOD federal civilian personnel covered by FERS and military personnel pay into Social Security and thus may be eligible to receive SSDI benefits. The maximum benefit to both groups in 2006 was $2,053 per month. However, DOD federal civilian personnel must choose between compensation payments and SSDI benefits, or have their disability retirement payments reduced when receiving SSDI benefits. Survivors of deceased DOD federal civilian and military personnel generally receive similar types of cash survivor benefits—either as a lump sum, a recurring payment, or both—through comparable sources. However, the benefit amounts generally differ for each group. Survivors of DOD federal civilian and military personnel also receive noncash benefits that differ in type and amount. As shown in table 8, survivors of deceased DOD federal civilian and military personnel both receive lump sum benefits in the form of Social Security, a death gratuity, burial expenses, and life insurance. Social Security provides $255 upon the death of a DOD federal civilian employee or military member. In addition, survivors of deceased DOD federal civilian personnel receive a death gratuity of up to $10,000, while survivors of deceased military personnel receive $100,000.
The payment for funeral expenses provided to survivors of deceased DOD federal civilian personnel can be as high as $800, plus $200 for costs associated with terminating employee status, while it can be $7,700 for deceased military personnel. Life insurance is another common source of benefits for the survivors of many deceased civilian and military personnel. Survivors of deceased federal civilian personnel receive a payment equal to the civilian’s rate of basic pay, rounded to the nearest thousand, plus $2,000. Military personnel are automatically insured under Servicemembers’ Group Life Insurance for up to $400,000, unless they elect less or no coverage. DOD federal civilian employees also receive a survivor benefit in their retirement plans. Survivors of deceased DOD federal civilian and military personnel are also eligible for recurring benefits, some of which are specific to each group, as shown in table 9. Survivors of both deceased DOD federal civilian and military personnel may be eligible to receive recurring Social Security payments based on the deceased individual’s earnings in a covered period. However, other types of recurring payments are specific to either civilian or military personnel. For example, survivors of DOD federal civilian personnel may receive recurring payments from a retirement plan or workers’ compensation if the death occurred while in the line of duty. Survivors of deceased military personnel also receive payments through the Survivor Benefit Plan, Dependency and Indemnity Compensation, or both. In addition to lump sum and recurring benefits, survivors of deceased DOD federal civilians and military personnel receive noncash benefits. As shown in table 10, survivors of deceased military personnel receive more noncash benefits than do those of deceased DOD federal civilian personnel, with few benefits being comparable in type. For example, eligible survivors of military personnel who die while on active duty obtain benefits such as rent-free government housing or tax-free housing allowances for up to 365 days, relocation assistance, and lifetime access to commissaries and exchanges that are not available to the survivors of civilian personnel who die in the line of duty. However, survivors of both deceased DOD federal civilian and military personnel do continue to receive health insurance that is wholly or partially subsidized. As DOD’s federal civilian employees assume an expanding role in helping the department support its contingency operations overseas, the need for attention to the policies and benefits that affect the health and welfare of these individuals becomes increasingly significant. DOD currently has important policies in place that relate to the deployment of its federal civilians. However, it lacks an adequate oversight and quality assurance mechanism to ensure compliance and quality of service. Thus, not all of its policies—such as those that define the department’s requirements for force health protection and surveillance—are being fully implemented by the DOD components. Until DOD improves its oversight in this area, it will jeopardize its ability to be effectively informed of the extent to which its federal civilians are screened and deemed medically fit to deploy in support of contingency operations, the extent to which deployed civilian personnel receive needed immunizations to counter theater disease threats, and the medical follow-up attention that federal civilians require for health problems or concerns that may arise following their deployment.
To strengthen DOD’s force health protection and surveillance for its federal civilian personnel who deploy in support of contingency operations, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to establish an oversight and quality assurance mechanism to ensure that all components fully comply with its requirements. In written comments on a draft of this report, DOD partially concurred with our recommendation. The department acknowledged that all deployed civilians must receive required medical assessments and immunizations and that documentation must be available in every instance. The department outlined several steps it intends to take to determine appropriate implementation of our recommendation. Specifically, the department stated that it has written and coordinated a new DOD instruction, scheduled to become effective before the end of 2006, that establishes a comprehensive DOD force health protection quality assurance program that will apply to DOD civilian personnel accompanying deploying military forces. While DOD’s response is encouraging, we remain concerned that the department’s description of the actions it plans to take to assess the components’ compliance with its requirements lacks sufficient detail. DOD was unable to provide us with a copy of the new instruction; thus, we could not evaluate the comprehensiveness of its new force health protection quality assurance program or determine whether the program identifies specific actions the department plans to take for assessing and ensuring the full implementation of the force health protection and surveillance requirements by all DOD components. DOD also stated that proposed revisions to its directives and instructions that address the planning, preparation, and utilization of DOD civilians include, among other things, annual assessments for compliance with pre- and post-deployment medical assessment requirements. However, the department did not describe what actions, if any, it plans to take to ensure that it will be sufficiently informed of the extent to which its components are complying with existing health protection requirements for its deployed federal civilians. In the absence of more specific details on its planned actions, we continue to emphasize the department’s need for a comprehensive oversight and quality assurance mechanism, without which it will not be effectively positioned to ensure compliance with its policies, or ensure the health care and protection of its deployed federal civilians as they continue to support contingency operations. In addition to its comments on our recommendation, the department took issue with some of our specific findings. DOD questioned our findings that in many cases DOD components were unable to produce documentation confirming that deployed federal civilians had received necessary pre- or post-deployment medical assessments or immunizations. The department stated that DOD activities, particularly regarding the Army Corps of Engineers, Transatlantic Programs Center (TPC), had determined that documentation did exist for many records included in our review, thus raising reservations about our findings. In particular, the department stated that the number (and percent) of records missing two or more immunizations that we reported for TPC was inaccurate.
It stated that, based on TPC’s review of the specific documentation that we used to support our findings, we had actually identified 69 records (54.3 percent) as missing two or more immunizations, rather than the 85 records (66.9 percent) noted in our draft report. We disagree. TPC overlooked 16 records included in our review that lacked documentation of any immunizations. Moreover, as we noted in our report, to provide assurances that the results of our review of hard copy deployment records at the selected component locations were accurate, we requested that each component’s designated medical personnel reexamine those deployment records that we determined were missing required health documentation. We then adjusted our results in those instances where documentation was subsequently provided. To provide additional assurances regarding our determinations, we requested that each component’s designated medical personnel review and sign the data collection instrument that we used to collect deployment health information from each individual civilian’s deployment record, attesting to our conclusions regarding the existence of health assessment or immunization documentation. DOD also stated that we inappropriately mixed discussion of Veterans Affairs and DOD benefits without distinguishing between the two. However, our report appropriately discusses two broad categories of “government-provided” benefits: (1) those provided by DOD and (2) those provided by VA. Nonetheless, to further clarify this section of our report, we added “VA” and “DOD” to our discussions of disability compensation and retirement benefits for military personnel. DOD also stated that our discussion of military disability benefits presented incorrect information in many cases, indicating that our statement that compensation payments for military personnel were based on a disability rating, regardless of salary level, is true only with regard to VA disability benefits. DOD also stated that DOD disability payments do, in fact, take into account salary level, and that if a former member is entitled to both, there is an offsetting mechanism. We agree. As we state in our report, under veterans’ compensation programs, benefits typically include cash payments to replace a percentage of the individual’s loss in wages while injured and unable to work. We also state that disability retirement benefits for military personnel are based on a percent of salary in addition to compensation benefits, and that the amount of retirement payment is offset dollar for dollar by the amount of monthly compensation payment unless military personnel have at least 20 years of service and a disability rating of 50 percent or more, or have combat-related disabilities. Further, DOD submitted detailed comments related to our analysis of special pays and benefits provided to deployed DOD federal civilian and military personnel. In particular, the department stated that our selection and presentation of the associated data on the special pays and benefits provided to DOD federal civilian and military personnel could easily mislead the reader into drawing erroneous conclusions. The department also stated that our comparisons did not take into account the relative value of certain key benefits for which explicit dollar amounts cannot be measured, such as retirement systems, health care systems, and military commissary exchange privileges.
To the contrary, our report did discuss this limitation; as is the case with any modeled scenarios based on certain assumptions, some factors with the potential to affect the overall outcomes of our comparisons could not be included because, as DOD pointed out, the relative value of certain key benefits cannot be measured in explicit dollar amounts. It is partly for this reason that we acknowledged in the report that we do not take a position on the adequacy or appropriateness of the special pays and benefits provided to DOD federal civilian and military personnel. DOD also requested that we clearly acknowledge the fundamental differences between the military and civilian systems. We believe that we have done so. As we noted in our report, we did not make direct analytical comparisons between compensation and benefits offered by DOD to deployed federal civilian and military personnel because such comparisons must account for the demands of the military service, such as involuntary relocation, frequent and lengthy separations from family, and liability for combat. DOD provided other technical comments, which we have incorporated as appropriate. The department’s comments are reprinted in their entirety in appendix III. We are sending copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Armed Services; the Chairman and Ranking Minority Member, House Committee on Armed Services; the Chairman and Ranking Minority Member, Subcommittee on Defense, Senate Committee on Appropriations; the Chairman and Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations; and other interested congressional parties. We are also sending copies to the Secretary of Defense and the Under Secretary of Defense for Personnel and Readiness. We will make copies available to other interested parties upon request. Copies of this report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-6304 or by e-mail at melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To assess the extent to which DOD has established force health protection and surveillance policies for DOD federal civilians who deploy outside of the United States in support of contingency operations, and how the components (military services and the Defense Contract Management Agency) have implemented those policies, we reviewed pertinent force health protection and surveillance policies and discussed these policies with the following offices or commands: U.S. Central Command; Joint Chiefs of Staff, Manpower and Personnel; Under Secretary of Defense for Personnel and Readiness (including the Assistant Secretary of Defense for Health Affairs, Deployment Health Support Directorate; Civilian Personnel Policy; and Civilian Personnel Management Services); the Surgeons General for the Army, Navy, and Air Force; and the Defense Contract Management Agency (DCMA). Our review focused on DOD federal civilians who (1) deployed to Afghanistan or Iraq for 30 continuous days or more between June 1, 2003, and September 30, 2005, and (2) returned to the United States by February 28, 2006.
Because DOD had difficulty identifying the total number of federal civilians who deployed to Afghanistan or Iraq, we assessed the implementation of DOD’s deployment health requirements at eight component locations that were selected using a number of approaches. Given that DOD components have flexibility in where they conduct deployment processing, we selected locations for our review accordingly. Specifically, the Army uses a centralized approach, deploying its federal civilians at three primary locations; therefore, we selected all three locations for review. By contrast, the Navy and Air Force use a decentralized approach, deploying their federal civilians from their home stations. For these components, we selected five locations based on data that indicated that these locations had deployed the largest numbers of federal civilian personnel. DCMA was included in our review because it had deployed the largest number of federal civilian personnel among the defense agencies. DCMA has an informal agreement with the Army to process its federal civilians through two of the Army’s three deployment locations. Therefore, DCMA federal civilian deployment data in this report are included in the Army results to the extent that DCMA federal civilian deployments were documented at the two relevant Army locations. At all eight component locations, we reviewed either all available hard copy or electronic deployment records or, in one instance, a sample of the deployment records for deployed federal civilian personnel who met our criteria above. Table 11 shows the locations included in our review and the number of deployment records reviewed at each location. In total, we reviewed 3,431 hard copy and automated records for federal civilian personnel who deployed to Afghanistan and Iraq. Specifically, we reviewed hard copies of deployment records for 454 (out of a reported 822) federal civilian personnel at seven component locations and automated deployment records for 2,977 (out of the reported 2,977) federal civilian personnel at the other location, where all deployment records were being maintained electronically. The results of deployment record reviews, however, could not be projected beyond the samples to all DOD federal civilians who had deployed during this time frame. To facilitate our review of federal civilian deployment records at the selected component locations, we developed a data collection instrument to review and collect deployment health information from each individual civilian’s deployment record. For federal civilians in our review at each location, we reviewed deployment records for documentation that the following force health protection and surveillance policy requirements were met: pre- and post-deployment health assessments; tuberculosis screening test (within 1 year of deployment); human immunodeficiency virus (HIV) screening test; and pre-deployment immunizations: hepatitis A (first and second course); influenza (within 1 year of deployment); tetanus-diphtheria (within 10 years of deployment); typhoid; and smallpox (within 5 years of deployment). After our review of hard copy deployment records, we requested that each component’s medical personnel reexamine those hard copy deployment records that were missing required health documentation, and we adjusted our results where documentation was subsequently provided.
We also requested and queried other documentation from information systems used by the components to capture deployment and related health information, making adjustments to our results where documentation was found in the systems. These data sources included the Army’s Medical Protection System (MEDPROS), the Army’s medical database (MedBase), the Air Force’s Preventive Health Assessment and Individual Medical Readiness (PIMR) system and its Comprehensive Immunization Tracking Application (CITA), DOD’s Defense Enrollment Eligibility Reporting System (DEERS), which is used by the Navy, and the Army Medical Surveillance Activity’s Defense Medical Surveillance System (DMSS). At the Army’s Fort Benning, we created a sampling frame (i.e., total population) of records for 606 federal civilian deployments between June 1, 2003, and September 30, 2005. Our study population was limited to DOD federal civilians who deployed to Afghanistan or Iraq. We then drew a stratified random sample of 288 deployment records, stratifying the sample to isolate potential duplicate deployment records for the same federal civilian. We found two duplicate records and removed them from both the population and sample, as shown in table 12. We also removed another 14 deployment records from our sample because those DOD federal civilians had been deployed to locations other than Afghanistan or Iraq, and were not eligible for the study population. In addition, we removed another 13 deployment records that were originally selected as potential replacement records; however, we found that those replacements were not needed. Ultimately, we identified 238 in-scope responses, for a weighted response rate of 87 percent. Each sampled record was subsequently weighted in the analysis to represent all DOD federal civilians deployed to Afghanistan or Iraq. The disposition of the federal civilian deployment records we reviewed at Fort Benning is summarized in the following table: Our probability sample is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the Fort Benning, Ga., samples we could have drawn. All percentage estimates from our sample have margins of error (that is, widths of confidence intervals) of plus or minus 5 percentage points or less, at the 95 percent confidence level, unless otherwise noted. We took steps to assess the reliability of DOD federal civilian deployment and health data for the purposes of this review, including obtaining information on the completeness of the data from the respective information systems’ program managers and administrators. We also examined whether the data were subjected to quality control measures such as periodic testing of the data against deployment records to ensure the accuracy and reliability of the data. In addition, we reviewed existing documentation related to the data sources and interviewed knowledgeable agency officials about the data.
We did not find these deployment and health data to be sufficiently reliable for (1) identifying the universe of DOD federal civilian deployments or (2) serving as the sole source for reviewing the health and immunization information for all DOD federal civilian deployments, but we found the information systems to be sufficiently reliable when used as one of several sources in our review of deployment records. In those instances where we did not find a deployment health assessment or immunization in either the deployment records or in the electronic data systems, we concluded that the health assessment or immunization was not documented. To determine the extent to which DOD has established and the components have implemented medical treatment policies for DOD federal civilians who deployed in support of contingency operations, we examined pertinent medical treatment policies for DOD federal civilian employees who required treatment for injuries and diseases sustained while supporting contingency operations. In addition, we obtained workers’ compensation claims filed by DOD federal civilian personnel with the Department of Labor’s Office of Workers’ Compensation Programs (OWCP) showing those civilians who sustained injuries and diseases during deployment. We selected and reviewed a non-probability sample of claims to assess the components’ processes and procedures for implementing DOD’s medical treatment policies across a range of civilian casualties, including injuries, physical and mental illnesses, and diseases. The scope of our review did not extend to the Department of Labor’s claims review process. To identify special pays and benefits provided to DOD federal civilians who deployed in support of contingency operations and to assess the extent to which special pays and benefits differ from those provided to deployed active duty military personnel, we examined major statutory provisions for special pays, disability benefits, and death benefits for federal civilians and military personnel, including relevant chapters of Title 5 of the U.S. Code governing federal civilian personnel management; relevant chapters of Title 10 of the U.S. Code governing armed forces personnel management; Section 112 of Title 26 of the U.S. Code governing the combat zone tax exclusion; relevant chapters of Title 37 of the U.S. Code governing pay and allowances for the uniformed services; relevant chapters of Title 38 of the U.S. Code governing veterans’ benefits; relevant provisions of applicable public laws governing military and civilian pay and benefits; applicable directives and instructions related to active duty military and DOD federal civilian benefits and entitlements; DOD financial management regulations; Department of State regulations; and prior GAO reports. In addition, we discussed the statutes and guidance with cognizant officials of the Office of the Under Secretary of Defense for Personnel and Readiness, military services’ headquarters, and the Defense Contract Management Agency involved with the administration of active duty and federal civilian personnel entitlements. We did not perform a comprehensive review of all compensation—which comprises a myriad of pays and benefits—offered to active duty military and federal civilian personnel in general. Our analysis focused on selected elements of compensation such as special pays (e.g., hostile fire/imminent danger pay).
Also, we did not make direct analytical comparisons between compensation and benefits offered by DOD to deployed federal civilian and military personnel because such comparisons must account for the demands of the military service, such as involuntary relocation, frequent and lengthy separations from family, and liability for combat. After reviewing documents and interviewing officials, we then compiled and analyzed the information on the types and amounts of special pays and benefits available to active duty military and DOD federal civilian personnel who deployed to Afghanistan or Iraq. We interviewed DOD officials to discuss the basis for any differences in compensation.

In addition, to illustrate how special pays affect overall compensation provided to DOD federal civilian and military personnel, we modeled scenarios for both groups using similar circumstances, such as length of deployment, pay grades, special pays (e.g., post differential pay, danger pay, overtime pay, family separation allowance, basic allowance for housing, basic allowance for subsistence), and duty location. Based on discussions with senior DOD officials, we assumed that deployed DOD federal civilians worked 30 hours of overtime per week. For deployed DOD federal civilians, we subtracted a contribution of $15,000 to the Thrift Savings Plan (TSP) to obtain the adjusted gross income. We assumed that DOD federal civilians, temporarily at a higher tax bracket, would take maximum advantage of the opportunity to defer taxes. We assumed that the military personnel would contribute a smaller percentage of pay, 5 percent of gross income, to TSP. We made this assumption because much of the military pay was not subject to federal taxes, which reduces the incentive to contribute to TSP, and because, unlike the TSP for federal civilian employees, the military TSP does not have a matching component. For military personnel, we also deducted the amount of pay not subject to taxes due to the combat zone exclusion, family separation allowance, basic allowance for subsistence, and basic allowance for housing. Using these assumptions, we generated an adjusted gross income and used that as input into a commercial tax program, Turbo Tax, to obtain federal taxes owed. We assumed that both DOD federal civilian and military personnel were married, filing jointly, with a spouse that earned no income. We assumed that the family had two children and qualified for two child tax credits and, if at that income level, the Earned Income Tax Credit. This resulted in four exemptions and a standard deduction of $10,000 in 2005. For purposes of validation, we repeated this exercise using an alternate tax program, Tax Cut, and obtained identical results.

We conducted our review from March 2006 to August 2006 in accordance with generally accepted government auditing standards.

Both DOD federal civilian and military personnel are eligible to receive disability benefits when they sustain a line-of-duty injury. However, these benefits vary in amount. Table 13 shows the temporary disability benefits available to eligible DOD federal civilian and military personnel. As table 13 shows, DOD federal civilians who are injured in the line of duty are eligible to receive continuation of their salary up to 45 days, followed by a recurring payment for wage loss that is based on a percentage of their salary and the existence of dependents, up to a cap.
In contrast, military personnel receive continuation of their salaries generally for no longer than a year, followed by a recurring payment for wage loss, which is based on the degree of disability and their number of dependents, and temporary retirement pay based on salary for up to 5 years. When a partial disability is determined to be permanent, both DOD federal civilians and military personnel are eligible to continue receiving recurring compensation payments, but again, the amounts of these benefits vary, as shown in table 14. As table 14 shows, DOD federal civilian personnel with permanent partial disabilities receive payments based on salary and dependents, while military personnel receive payments based on the severity of the injury and their number of dependents, as long as the condition persists.

In addition to the contact named above, Sandra Burrell, Assistant Director; William Bates; Dr. Benjamin Bolitzer; Alissa Czyz; George Duncan; Steve Fox; Dawn Godfrey; Nancy Hess; Lynn Johnson; Barbara Joyce; Dr. Ronald La Due Lake; William Mathers; Paul Newton; Dr. Charles Perdue; Jason Porter; Julia Matta; Susan Tieh; John Townes; and Dr. Monica Wolford made key contributions to this report.
As the Department of Defense (DOD) has expanded its involvement in overseas military operations, it has grown increasingly reliant on its federal civilian workforce to support contingency operations. The Senate Armed Services Committee required GAO to examine DOD's policies concerning health care for DOD civilians who deploy in support of contingency operations in Afghanistan and Iraq. GAO analyzed over 3,400 deployment-related records for deployed federal civilians and interviewed department officials to determine the extent to which DOD has established and the military services and defense agencies (hereafter referred to as DOD components) have implemented (1) force health protection and surveillance policies and (2) medical treatment policies and procedures for its deployed federal civilians. GAO also examined the differences in special pays and benefits provided to DOD's deployed federal civilians and military personnel.

DOD has established force health protection and surveillance policies to assess and reduce or prevent health risks for its deployed federal civilian personnel, but it lacks procedures to ensure implementation. GAO's review of over 3,400 deployment records at eight component locations found that components lacked documentation that some federal civilian personnel who deployed to Afghanistan and Iraq had received, among other things, required pre- and post-deployment health assessments and immunizations. These deficiencies were most prevalent at Air Force and Navy locations, and one Army location. As a larger issue, DOD lacked complete and centralized data to readily identify its deployed federal civilians and their movement in theater, further hindering its efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. In August 2006, DOD issued a revised policy which outlined procedures that are intended to address these shortcomings. However, these procedures are not comprehensive enough to ensure that DOD will know the extent to which its components are complying with existing health protection requirements. In particular, the procedures do not establish an oversight and quality assurance mechanism for assessing the implementation of its force health protection and surveillance requirements. Until DOD establishes a mechanism to strengthen its force health protection and surveillance oversight, it will not be effectively positioned to ensure compliance with its policies, or the health care and protection of deployed federal civilians.

DOD has also established medical treatment policies for its deployed federal civilians which provide those who require treatment for injuries or diseases sustained during overseas hostilities with care that is equivalent in scope to that provided to active duty military personnel under the DOD military health system. GAO reviewed a sample of seven workers' compensation claims (out of a universe of 83) filed under the Federal Employees' Compensation Act by DOD federal civilians who deployed to Iraq. GAO found that in three cases where care was initiated in theater, the affected civilians had received treatment in accordance with DOD's policies. In all seven cases, DOD federal civilians who requested care after returning to the United States had, in accordance with DOD's policies, received medical examinations and/or treatment for their deployment-related injuries or diseases through either military or civilian treatment facilities.
DOD provides certain special pays and benefits to its deployed federal civilians, which generally differ in type and/or amount from those provided to deployed military personnel. For example, both civilian and military personnel are eligible to receive disability benefits for deployment-related injuries; however, the type and amount of these benefits vary, and some are unique to each group. Further, while the survivors of deceased federal civilian and military personnel generally receive similar types of cash survivor benefits, the comparative amounts of these benefits differ.
Most income derived from private sector business activity in the United States is subject to federal corporate income tax, the individual income tax, or both. The tax treatment that applies to a business depends on its legal form of organization. Firms that are organized under the tax code as "C" corporations (which include most large, publicly held corporations) have their profits taxed once at the entity level under the corporate income tax (on a form 1120) and then a second time under the individual income tax when profits are transferred to individual shareholders in the form of dividends or realized capital gains. Firms that are organized as "pass-through" entities, such as partnerships, limited liability companies, and "S" corporations, are generally not taxed at the entity level; however, their net incomes are passed through each year and taxed in the hands of their partners or shareholders under the individual income tax (as part of those taxpayers' form 1040 filing). Similarly, income from businesses that are owned by single individuals enters into the taxable incomes of those owners under the individual income tax and is not subject to a separate entity-level tax.

The base of the federal corporate income tax includes net income from business operations (receipts, minus the costs of purchased goods, labor, interest, and other expenses). It also includes net income that corporations earn in the form of interest, dividends, rent, royalties, and realized capital gains. The statutory rate of tax on net corporate income ranges from 15 to 35 percent, depending on the amount of income earned. The United States taxes the worldwide income of domestic corporations, regardless of where the income is earned, with a foreign tax credit for certain taxes paid to other countries. However, the timing of the tax liability depends on several factors, including whether the income is from a U.S. or foreign source and, if it is from a foreign source, whether it is earned through direct operations or through a subsidiary.

The base of the individual income tax covers business-source income paid to individuals, such as dividends, realized net capital gains on corporate equity, and income from self-employment. The statutory rates of tax on net taxable income range from 10 percent to 35 percent. Lower rates (generally 5 percent and 15 percent, depending on taxable income) apply to long-term capital gains and dividend income. Sole proprietors also pay both the employer and employee shares of social insurance taxes on their net business income. Generally, a U.S. citizen or resident pays tax on his or her worldwide income, including income derived from foreign-source dividends and capital gains, subject to a credit for foreign taxes paid on such income.

Three long-standing criteria—economic efficiency, equity, and a combination of simplicity, transparency, and administrability—are typically used to evaluate tax policy. These criteria are often in conflict with each other, and as a result, there are usually trade-offs to consider and people are likely to disagree about the relative importance of the criteria. Specific aspects of business taxes can be evaluated in terms of how they support or detract from the efficiency, equity, simplicity, transparency, and administrability of the overall tax system. To the extent that a tax system is not simple and efficient, it imposes costs on taxpayers beyond the payments they make to the U.S. Treasury.
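To make the single layer of tax on pass-through income versus the two layers on C corporation income concrete, the sketch below compares $100 of profit under each form. The 35 percent corporate and individual rates and the 15 percent dividend rate are assumptions for illustration, drawn from the ranges cited above; the sketch is not a model of any particular taxpayer.

```python
# Illustrative comparison of the combined tax on $100 of business profit
# earned through a C corporation (and paid out as a dividend) versus the
# same profit earned through a pass-through entity. Rates are assumed.
profit = 100.0

# C corporation: entity-level tax first, then the dividend is taxed on the
# shareholder's individual return.
corporate_rate, dividend_rate = 0.35, 0.15
after_corporate_tax = profit * (1 - corporate_rate)             # 65.00
after_dividend_tax = after_corporate_tax * (1 - dividend_rate)  # 55.25
c_corp_total_tax = profit - after_dividend_tax                  # 44.75

# Pass-through entity: the profit is taxed once, on the owner's return.
individual_rate = 0.35
pass_through_total_tax = profit * individual_rate               # 35.00

print(f"C corporation, combined tax on $100: {c_corp_total_tax:.2f}")
print(f"Pass-through, tax on $100:           {pass_through_total_tax:.2f}")
```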
As shown in figure 1, the total cost of any tax from a taxpayer's point of view is the sum of the tax liability, the cost of complying with the tax system, and the economic efficiency costs that the tax imposes. In deciding on the size of government, we balance the total cost of taxes with the benefits provided by government programs. A complete evaluation of the tax treatment of businesses, which is a critical element of our overall federal tax system, cannot be made without considering how business taxation interacts with and complements the other elements of the overall system, such as the tax treatment of individuals and excise taxes on selected goods and services. This integrated approach is also appropriate for evaluating reform alternatives, regardless of whether those alternatives take the form of a simplified income tax system, a consumption tax system, or some combination of the two.

Businesses contribute significant revenues to the federal government, both directly and indirectly. As figure 2 shows, corporate businesses paid $278 billion in corporate income tax directly to the federal government in 2005. Individuals earn income from business investment in the form of dividends and realized capital gains from C corporations; income allocations from partnerships and S corporations; entrepreneurial income from their own sole proprietorships; and rents and royalties. In recent years this business-source income, which is all taxed under the individual income tax, has amounted to between roughly 14 percent and 19 percent of the income of individuals who have paid individual income tax. In addition to the taxes that are paid on business-source income, most of the remainder of federal taxes is collected and passed on to the government by businesses.

Business tax revenues of this magnitude make business taxes very relevant to considerations about how to address the nation's long-term fiscal imbalance. Over the long term, the United States faces a large and growing structural budget deficit primarily caused by demographic trends and rising health care costs, as shown in figure 3, and exacerbated over time by growing interest on the ever-larger federal debt. Continuing on this imprudent and unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. We cannot grow our way out of this long-term fiscal challenge because the imbalance between spending and revenue is so large. We will need to make tough choices using a multipronged approach: (1) revise budget processes and financial reporting requirements; (2) restructure entitlement programs; (3) reexamine the base of discretionary spending and other spending; and (4) review and revise tax policy, including tax expenditures, and tax enforcement programs. Business tax policy, business tax expenditures, and business tax enforcement need to be part of the overall tax review because of the amount of revenue at stake.

Business tax expenditures reduce the revenue that would otherwise be raised from businesses. As already noted, to reduce their tax liabilities, businesses can take advantage of preferential provisions in the tax code, such as exclusions, exemptions, deductions, credits, preferential rates, and deferral of tax liability. Tax preferences—which are legally known as tax expenditures—are often aimed at policy goals similar to those of federal spending programs.
For example, there are different tax expenditures intended to encourage economic development in disadvantaged areas and stimulate research and development, while there are also federal spending programs that have similar purposes. Also, by narrowing the tax base, business tax expenditures have the effect of raising either business tax rates or the rates on other taxpayers in order to generate a given amount of revenue.

The design of the current system of business taxation causes economic inefficiency and is complex. The complexity provides fertile ground for noncompliance and raises equity concerns. Our current system for taxing business income causes economic inefficiency because it imposes significantly different effective rates of tax on different types of investments. Tax treatment that is not neutral across different types of capital investment causes significant economic inefficiency by guiding investments to lightly taxed activities rather than those with high pretax productivity. However, the goal of tax policy is not to eliminate efficiency costs. The goal is to design a tax system that produces a desired amount of revenue and balances economic efficiency with other objectives, such as equity, simplicity, transparency, and administrability. Every practical tax system imposes efficiency costs.

There are some features of current business taxation that have attracted criticism from economists and other tax experts because of efficiency costs. My point in raising them here is not that these features need to be changed—that is a policy judgment for Congress to make as it balances various goals. Rather, my point is that these economic consequences of tax policy need to be considered as we think about reform. The following are among the most noted cases of nonneutral taxation in the federal business tax system:

Income earned on equity-financed investments made by C corporations is taxed twice—under both the corporate and individual income taxes, whereas no other business income is taxed more than once. Moreover, even noncorporate business investment is taxed more heavily than owner-occupied housing—a form of capital investment that receives very preferential treatment. As a result, resources have been shifted away from higher-return business investment into owner-occupied housing, and, within the business sector, resources have been shifted from higher-return corporations to noncorporate businesses. Such shifting of investment makes workers less productive than they would be under a more neutral tax system. This results in employees receiving lower wages because increases in employee wages are generally tied to increases in productivity. As noted above, such efficiency costs may be worth paying in order to meet other policy goals. For example, many policymakers advocate increased homeownership as a social policy goal.

Depreciation allowances under the tax code vary considerably in generosity across different assets, causing effective tax rates to vary and, thereby, favoring investment in certain assets over others. For example, researchers have found that the returns on most types of investments in equipment are taxed more favorably than are most investments in nonresidential buildings. These biases shift resources away from some investments in buildings that would have been more productive than some of the equipment investments that are being made instead.

Tax rules for corporations favor the use of debt over shareholder equity as a source of finance for investment.
The return on debt-financed investment consists of interest payments to the corporation's creditors, which are deductible by the corporations. Consequently, that return is taxed only once—in the hands of the creditors. In contrast, the return on equity-financed investment consists of dividends and capital gains, which are not deductible by the corporation. These forms of income are taxed under the individual income tax even though they are paid out of income that has already been subject to the corporate income tax. The bias against equity finance induces corporations to have less of an "equity cushion" against business downturns.

Capital gains on corporate equity are taxed more favorably than dividends because that tax can be deferred until the gains are realized (typically when shareholders sell their stock). This bias against dividend payments likely means that more profits are retained within corporations than otherwise would be the case and, therefore, the flow of capital to its most productive uses is being constrained.

The complex set of rules governing U.S. taxation of the worldwide income of domestic corporations (those incorporated in the United States) leads to wide variations in the effective rate of tax paid on that income, based on the nature and location of each corporation's foreign operations and the effort put into tax planning. In effect, the active foreign income of some U.S. corporations is taxed more heavily than if the United States followed the practice of many other countries and exempted such income from tax. However, other U.S. corporations are able to take advantage of flexibilities in the U.S. tax rules in order to achieve treatment that is equivalent to or, in some cases, more favorable than the so-called "territorial" tax systems that exempt foreign-source active business income. As a consequence, some U.S. corporations face a tax disadvantage, while others have an advantage, relative to foreign corporations when competing in foreign countries. Those U.S. corporations that have a disadvantage are likely to locate a smaller share of their investment overseas than would be the case in a tax-free world; the opposite is true for those U.S. corporations with the tax advantage. Moreover, the tax system encourages U.S. corporations to alter their cash-management and financing decisions (such as by delaying the repatriation of profits) in order to reduce their taxes.

The taxation of business income is part of the broader taxation of income from capital. The taxation of capital income in general (even when that taxation is uniformly applied) causes another form of inefficiency beyond the inefficiencies caused by the aforementioned cases of differential taxation across types of investments. This additional inefficiency occurs because taxes on capital reduce the after-tax return on savings and, thereby, distort the choice that individuals make between current consumption and saving for future consumption. However, although research shows that the demand for some types of savings, such as the demand for tax exempt bonds, is responsive to tax changes, there is greater uncertainty about the effects of tax changes on other choices, such as aggregate savings.

Sometimes the concerns about the negative effects of taxation on the U.S. economy are couched in terms of "competitiveness," where the vaguely defined term competitiveness is often defined as the ability of U.S. businesses to export their products to foreign markets and to compete against foreign imports into the U.S. market.
The goal of those who push for this type of competitiveness is to improve the U.S. balance of trade. However, economists generally agree that trying to increase the U.S. balance of trade through targeted tax breaks for exports does not work. Such a policy, aimed at lowering the prices of exports, would be offset by an increase in the value of the dollar, which would make U.S. exports more expensive and imports into the United States less expensive, ultimately leaving both the balance of trade and the standard of living of Americans unchanged.

An alternative definition of competitiveness that is also sometimes used in tax policy debates refers to the ability of U.S.-owned firms operating abroad to compete in foreign markets. The current U.S. policy of taxing the worldwide income of U.S. businesses places some of their foreign operations at a disadvantage. The tradeoffs between a worldwide system and a territorial tax system are discussed below.

Tax compliance requirements for businesses are extensive and complex. Rules governing the computation of taxable income, expense deductions, and tax credits of U.S. corporations that do business in multiple foreign countries are particularly complex. But even small businesses face multiple levels of tax requirements of varying difficulty. In addition to computing and documenting their income, expenses, and qualifications for various tax credits, businesses with employees are responsible for collecting and remitting (at varying intervals) several federal taxes on the incomes of those employees. Moreover, if the businesses choose to offer their employees retirement plans and other fringe benefits, they can substantially increase the number of filings they must make. Businesses also have information-reporting responsibilities—employers send wage statements to their employees and to IRS; banks and other financial intermediaries send investment income statements to clients and to IRS. Finally, a relatively small percentage of all businesses (which nevertheless number in the hundreds of thousands) are required to participate in the collection of various federal excise taxes levied on fuels, heavy trucks and trailers, communications, guns, tobacco, and alcohol, among other products.

It is difficult for researchers to accurately estimate compliance costs for the tax system as a whole or for particular types of taxpayers because taxpayers generally do not keep records of the time and money spent complying with tax requirements. The studies we found that focus on the compliance costs of businesses estimate those costs to be between about $40 billion and $85 billion per year. None of these estimates include the costs to businesses of collecting and remitting income and payroll taxes for their employees. The accuracy of these business compliance cost estimates is uncertain due to the low rates of response to their data-collection surveys. In addition, the range in estimates across the studies is due, among other things, to differences in the monetary values used to value taxpayers' time (ranging between $25 per hour and $37.26 per hour), differences in the business populations covered, and differences in the tax years covered.

Although the precise amount of business tax avoidance is unknown, IRS's latest estimates of tax compliance show a tax gap of at least $141 billion for tax year 2001 between the business taxes that individual and corporate taxpayers paid and what they should have paid under the law.
Corporations contributed about $32 billion to the tax gap by underreporting about $30 billion in taxes on tax returns and failing to pay about $2 billion in taxes that were reported on returns. Individual taxpayers that underreported their business income accounted for the remaining $109 billion of the business income tax gap.

A complex tax code, complicated business transactions, and often multinational corporate structures make determining business tax liabilities and the extent of corporate tax avoidance a challenge. Tax avoidance has become such a concern that some tax experts say corporate tax departments have become "profit centers" as corporations seek to take advantage of the tax laws in order to maximize shareholder value. Some corporate tax avoidance is clearly legal, some falls in gray areas of the tax code, and some is clearly noncompliance or illegal, as shown by IRS's tax gap estimate.

Often business tax avoidance is legal. For example, multinational corporations can locate active trade or business operations in jurisdictions that have lower effective tax rates than does the United States and, unless and until they repatriate the income, defer taxation in the United States on that income, thus reducing their effective tax rate. In addition, investors can avoid paying the corporate income tax by putting their money into unincorporated businesses or into real estate.

Complicating corporate tax compliance is the fact that in many cases the law is unclear or subject to differing interpretations. In fact, some have postulated that major corporations' tax returns are actually just the opening bid in an extended negotiation with IRS to determine a corporation's tax liability. An illustration—once again from the complex area of international tax rules—is transfer pricing. Transfer pricing involves setting the appropriate price for such things as goods, services, or intangible property (such as patents, trademarks, copyrights, technology, or "know-how") that are transferred between the U.S.-based operations of a multinational company and a foreign affiliate. If the price paid by the affiliate to the U.S. operation is understated, the profits of the U.S. operation are reduced and U.S. taxable income is inappropriately reduced or eliminated. The standard for judging the correct price is the price that would have been paid between independent enterprises acting at "arm's length." However, it can be extremely difficult to establish what an arm's length price would be. Given the global economy and the number of multinational firms with some U.S.-based operations, opportunities for transfer pricing disputes are likely to grow.

Tax shelters are one example of how tax avoidance, including corporate tax avoidance, can shade into the illegal. Some tax shelters are legal though perhaps aggressive interpretations of the law, but others cross the line. Abusive shelters often are complex transactions that manipulate many parts of the tax code or regulations and are typically buried among legitimate transactions reported on tax returns. Because these transactions are often composed of many pieces located in several parts of a complex tax return, they are essentially hidden from plain sight, which contributes to the difficulty of determining the scope of the abusive shelter problem.
Often lacking economic substance or a business purpose other than generating tax benefits, abusive shelters have been promoted by some tax professionals, often in confidence, for significant fees, sometimes with the participation of tax-indifferent parties, such as foreign or tax-exempt entities. These shelters may involve unnecessary steps and flow-through entities, such as partnerships, which make detection of these transactions more difficult.

Regarding compliance with our tax laws, the success of our tax system hinges greatly on individual and business taxpayers' perception of its fairness and understandability. Compliance is influenced not only by the effectiveness of IRS's enforcement efforts but also by Americans' attitudes about the tax system and their government. A recent survey indicated that about 10 percent of respondents say it is acceptable to cheat on their taxes. Furthermore, the complexity of, and frequent revisions to, the tax system make it more difficult and costly for taxpayers who want to comply to do so and for IRS to explain and enforce tax laws. The lack of transparency also fuels disrespect for the tax system and the government. Thus, a crucial challenge in evaluating our business tax system will be to determine how we can best strengthen enforcement of existing laws to give business owners confidence that their competitors are paying their fair share and to give wage earners confidence that businesses in general bear their share of taxes. One option that has been suggested as a means of improving public confidence in the tax system's fairness is to make the reconciliation between book and tax income that businesses present on Schedule M-3 of their tax returns available for public review.

Reform of our business tax system will necessarily mean making broad design choices about the overall tax system and how business taxes are coordinated with other taxes. The tax reform debate of the last several years has focused attention on several important choices, including the extent to which our system should be closer to the extreme of a pure income tax or the other extreme of a pure consumption tax, the extent to which sales by U.S. businesses outside of this country should be taxed, the extent to which taxes should be collected from businesses or individuals, and the extent to which taxpayers are compensated for losses or costs they incur during the transition to any new tax system. Generally there is no single "right" decision about these choices and the options are not limited to selecting a system that is at one extreme or the other along the continuum of potential systems. The choices will involve making tradeoffs between the various goals for our tax system.

The fundamental difference between income and consumption taxes lies in their treatment of savings and investment. Income can be used for either consumption or saving and investment. The tax base of a pure income tax includes all income, regardless of what it is ultimately used for; in contrast, the tax base of a consumption tax excludes income devoted to saving and investment (until it is ultimately used for consumption). The current tax system is a hybrid between a pure income tax and a pure consumption tax because it effectively exempts some types of savings and investment but taxes other types. As noted earlier, evidence is inconclusive regarding whether a shift closer to a consumption tax base would significantly affect the level of savings by U.S. taxpayers.
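A small numerical sketch may help fix the base difference described above. The household, the 20 percent flat rate, and the 5 percent return below are assumptions for illustration only.

```python
# Illustrative contrast between a pure income tax base and a pure
# consumption tax base for a household that earns $100 and saves $40.
# The amounts, the 20 percent rate, and the 5 percent return are assumed.
earnings, saved = 100.0, 40.0
rate = 0.20

# Pure income tax: the base is all income, however it is used.
income_tax_now = rate * earnings                      # 20.0

# Pure consumption tax: income devoted to saving is excluded from the base
# until it is withdrawn and consumed in a later year.
consumption_tax_now = rate * (earnings - saved)       # 12.0

# When the $40 of saving (plus, say, a 5 percent return) is later consumed,
# the full withdrawal enters that year's consumption tax base.
later_withdrawal = saved * 1.05                       # 42.0
consumption_tax_later = rate * later_withdrawal       # 8.4

print(income_tax_now, consumption_tax_now, consumption_tax_later)
```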
There is, however, a consensus among economists that uneven tax treatment across different types of investment should be avoided unless the efficiency costs resulting from preferential tax treatment are outweighed by the social benefits generated by the tax preference. That objective could be achieved under either a consumption tax that exempts all new savings and investment from taxation (which means that all business profits are exempt) or a revised income tax that taxes all investments at the same effective rate. In comparison to the current system, a consumption tax's exemption of business-source income would likely encourage U.S. businesses to increase their investment in the United States relative to their foreign investment. Both income and consumption taxes can be structured in a variety of ways, as discussed in the following subsections, and the choice of a specific design for either type of tax can have as significant implications for efficiency, administrability, and equity as the choice between a consumption or income base. The exemption of saving and investment can be accomplished in different ways, so consumption taxes can be structured differently and yet still have the same overall tax base.

Both income and consumption taxes can be levied on individuals or businesses, or on a combination of the two. Whether collected from individuals or businesses, ultimately, individuals will bear the economic burden of any tax (as wage earners, shareholders, or consumers). The choice of whether to collect a tax at the business level or the individual level depends on whether it is thought to be desirable to levy different taxes on different individuals. A business-level tax, whether levied on income or consumption, can be collected "at source"—that is, where it is generated—so there can be many fewer tax filers and returns to administer. Business-level taxes cannot, however, directly tax different individuals at different rates. Individual-level taxes can allow for distinctions between different individuals; for example, standard deductions or graduated rates can be used to tax individuals with low income (or consumption) at a lower rate than individuals with greater income (or consumption). However, individual-level taxes require more tax returns, impose higher compliance costs, and would generally require a larger tax administration system.

A national retail sales tax, a consumption value-added tax, and an income value-added tax are examples of taxes that would be collected only at the business level. A personal consumption tax and an integrated individual income tax are examples of taxes that would be collected only at the individual level. The "flat tax" proposed by economists Robert Hall and Alvin Rabushka, which has received attention in recent years, is an example of a tax collected at both the business and individual level.

Our current system for taxing corporate-source income involves taxation at both the corporate and individual level in a manner that results in the double taxation of the same income. Under a pure worldwide tax system, the United States would tax the income of U.S. corporations, as it is earned, regardless of where it is earned, and at the same time provide a foreign tax credit that ensures that the combined rate of tax that a corporation pays to all governments on each dollar of income is exactly equal to the U.S. corporate tax rate. Some basic differences between the current U.S. tax system and a pure worldwide system are that (1) in many cases the U.S.
system permits corporations to defer U.S. tax on their foreign-source income until it is repatriated and (2) the U.S. foreign tax credit is limited to the amount of U.S. tax that would be due on a corporation's foreign-source income. In cases where the rate of foreign tax on a corporation's income exceeds the U.S. tax rate, the corporation is left paying the higher rate of tax.

Under a pure territorial tax system, the United States would simply exempt all foreign-source income. (No major country has a pure territorial system; they all tax mobile forms of foreign-source income, such as royalties and income from securities.) The current U.S. tax system has some features that result in some cases in treatment similar to what would exist under a territorial system. First, corporations can defer U.S. tax indefinitely on certain foreign-source income, as long as they keep it reinvested abroad. Second, in certain cases U.S. corporations are able to use the excess credits that they earned for taxes they paid to high-tax countries to completely offset any U.S. tax that they would normally have to pay on income they earned in low-tax countries. As a result, that income from low-tax countries remains untaxed by the United States—just as it would be under a territorial system. In fact, there are some cases where U.S. corporations enjoy tax treatment that is more favorable than under a territorial system. This occurs when they pay no U.S. tax on foreign-source income yet are still able to deduct expenses allocable to that income. For example, a U.S. parent corporation can borrow money and invest it in a foreign subsidiary. The parent corporation generally can deduct its interest payments from its U.S. taxes even if it defers U.S. tax on the subsidiary's income by leaving it overseas.

Proponents of a worldwide tax system and proponents of a territorial system both argue that their preferred systems would provide important forms of tax neutrality. Under a pure worldwide system, all of the income that a U.S. corporation earns abroad would be taxed at the same effective rate that a corporation earning the same amount of income domestically would pay. Such a tax system is neutral in the sense that it does not influence the decision of U.S. corporations to invest abroad or at home. If the United States had a pure territorial tax system, all of the income that U.S. corporations earn in a particular country would be taxed at the same rate as corporations that are residents of that country. The pure territorial system is neutral in the specific sense that U.S. corporations investing in a foreign country would not be at a disadvantage relative to corporations residing in that country or relative to other foreign corporations investing there. In a world where each country sets its own tax rules it is impossible to achieve both types of neutrality at the same time, so tradeoffs are unavoidable.

A change from the current tax system to a pure territorial one is likely to have mixed effects on tax compliance and administration. On the one hand, a pure worldwide tax system, or even the current system, may preserve the U.S. tax base better than a territorial system would because U.S. taxpayers would have greater incentive under a territorial system to shift income and investment into low-tax jurisdictions via transfer pricing.
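The foreign tax credit limitation and excess-credit mechanics described above can be illustrated with a simplified calculation. In the sketch below, the 35 percent U.S. rate and the foreign rates are assumptions for illustration, and the computation ignores separate limitation baskets, expense allocation, deferral timing, and the many other rules that apply in practice.

```python
# A simplified sketch of the foreign tax credit limitation and of
# cross-crediting between high-tax and low-tax foreign income. The 35 percent
# U.S. rate and the foreign rates are assumptions for illustration only.
US_RATE = 0.35

def residual_us_tax(foreign_income_and_rates):
    """U.S. tax on repatriated foreign-source income after the foreign tax
    credit, with the credit limited to the U.S. tax due on that income."""
    total_income = sum(income for income, _ in foreign_income_and_rates)
    foreign_tax = sum(income * rate for income, rate in foreign_income_and_rates)
    us_tax_before_credit = total_income * US_RATE
    credit = min(foreign_tax, us_tax_before_credit)   # the limitation
    return us_tax_before_credit - credit

# High-tax country alone: excess credits, no residual U.S. tax, but the
# corporation is left paying the higher foreign rate (45 percent here).
print(round(residual_us_tax([(100, 0.45)]), 2))                 # 0.0

# Low-tax country alone: residual U.S. tax is owed on repatriation.
print(round(residual_us_tax([(100, 0.10)]), 2))                 # 25.0

# Both together: excess credits from the high-tax country offset part of the
# U.S. tax otherwise due on the low-tax country's income.
print(round(residual_us_tax([(100, 0.45), (100, 0.10)]), 2))    # 15.0, not 25.0
```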
On the other hand, a pure territorial system may be less complex for IRS to administer and for taxpayers to comply with than the current tax system because there would be no need for the antideferral rules or the foreign tax credit, which are among the most complex features of the current system.

Broad-based consumption taxes can differ depending on whether they are imposed under a destination principle, which holds that goods and services should be taxed in the countries where they are consumed, or an origin principle, which holds that goods and services should be taxed in the countries where they are produced. In the long run, after markets have adjusted, neither type of tax would have a significant effect on the U.S. trade balance. This is true for a destination-based tax because products consumed in the United States would be taxed at the same rate, regardless of where they were produced. Therefore, such a tax would not influence a consumer's choice between buying a car produced in the United States or one imported from Japan. And at the same time, U.S. exports of cars would not be affected by the tax because they would be exempted. An origin-based consumption tax would not affect the trade balance because the effects that the tax would have on prices would ultimately be countered by the same price adjustment mechanism that we discussed earlier with respect to targeted tax subsidies for exports.

A national retail sales tax limited to final consumption goods would be a destination-principle tax; it would tax imports when sold at retail in this country and would not tax exports. Value-added taxes can be designed as either destination or origin-principle taxes. A personal consumption tax, collected at the individual level, would apply to U.S. residents or citizens and could be formulated to tax their consumption regardless of whether it is done domestically or overseas. Under such a system, income earned abroad would be taxable but funds saved or invested abroad would be deductible. In that case, foreign-produced goods imported into the United States or consumed by U.S. citizens abroad would be taxed. U.S. exports would only be taxed to the extent that they are consumed by U.S. citizens abroad.

A wide range of options exists for moving from the current business tax system to an alternative one, and the way that any transition is formulated could have significant effects on economic efficiency, equity, taxpayer compliance burden, and tax administration. For example, one transition issue involves whether tax credits and other tax benefits already earned under the current tax would be made available under a new system. Businesses that are deducting depreciation under the current system would not have the opportunity to continue depreciating their capital goods under a VAT unless special rules were included to permit it. Similar problems could arise with businesses' carrying forward net operating losses and recovering unclaimed tax credits. Depending on how these and other issues are addressed, taxpayer compliance burden and tax administration responsibilities could be greater during the transition period than they currently are or than they would be once the transition ends. Transition rules could also substantially reduce the new system's tax base, thereby requiring higher tax rates during the transition if revenue neutrality were to be achieved.
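The difference between the destination and origin principles described above comes down to which cross-border sales enter the tax base. The short sketch below works through a domestic sale, an export, and an import under each principle; the 10 percent rate and the $100 sale amounts are assumptions for illustration only.

```python
# Illustrative treatment of the same three $100 sales under a
# destination-principle and an origin-principle consumption tax.
# The 10 percent rate is an assumption for illustration only.
RATE = 0.10

def consumption_tax(amount, produced_domestically, consumed_domestically, principle):
    """Tax due on one sale under the stated principle."""
    if principle == "destination":
        taxable = consumed_domestically   # tax where the good is consumed
    elif principle == "origin":
        taxable = produced_domestically   # tax where the good is produced
    else:
        raise ValueError(f"unknown principle: {principle}")
    return amount * RATE if taxable else 0.0

sales = [
    ("domestic sale", True, True),
    ("export", True, False),
    ("import", False, True),
]
for label, produced_here, consumed_here in sales:
    dest = consumption_tax(100, produced_here, consumed_here, "destination")
    orig = consumption_tax(100, produced_here, consumed_here, "origin")
    print(f"{label:13s}  destination: {dest:5.2f}  origin: {orig:5.2f}")
```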
Our publication, Understanding the Tax Reform Debate: Background, Criteria, and Questions, may be useful in guiding policymakers as they consider tax reform proposals. It was designed to aid policymakers in thinking about how to develop tax policy for the 21st century. The criteria for a good tax system, which our report discusses, provide the basis for a set of principles that should guide Congress as it considers the choices and tradeoffs involved in tax system reform. And, as I also noted earlier, proposals for reforming business taxation cannot be evaluated without considering how that business taxation will interact with and complement the other elements of our overall future tax system.

The proposed system should raise sufficient revenue over time to fund our expected expenditures. As I mentioned earlier, we will fall woefully short of achieving this end if current spending or revenue trends are not altered. Although we clearly must restructure major entitlement programs and the base of other federal spending, it is unlikely that our long-term fiscal challenge will be resolved solely by cutting spending.

The proposal should look to future needs. Like many spending programs, the current tax system was developed in a profoundly different time. We live now in a much more global economy, with highly mobile capital, and with investment options available to ordinary citizens that were not even imagined decades ago. We have growing concentrations of income and wealth. More firms operate multinationally and willingly move operations and capital around the world as they see best for their firms. As an adjunct to looking forward when making reforms, better information on existing commitments and promises must be coupled with estimates of the long-term discounted net present value costs from spending and tax commitments comprising longer-term exposures for the federal budget beyond the existing 10-year budget projection window.

The tax base should be as broad as possible. Broad-based tax systems with minimal exceptions have many advantages. Fewer exceptions generally mean less complexity, less compliance cost, and less economic efficiency loss and, by increasing transparency, may improve equity or perceptions of equity. This suggests that eliminating or consolidating numerous tax expenditures must be considered. In many cases tax preferences are simply a form of "back-door spending." We need to be sure that the benefits achieved from having these special provisions are worth the associated revenue losses, just as we must ensure that outlay programs—which may be attempting to achieve the same purposes as tax expenditures—achieve outcomes commensurate with their costs. And it is important to supplement these cost-benefit evaluations with analyses of distributional effects—i.e., who bears the costs of the preferences and who receives the benefits. To the extent tax expenditures are retained, consideration should be given to whether they could be better targeted to meet an identified need.

If we must raise revenues, doing so from a broad base and a lower rate will help minimize economic efficiency costs. Broad-based tax systems can yield the same revenue as more narrowly based systems at lower tax rates. The combination of less direct intervention in the marketplace from special tax preferences, and the lower rates possible from broad-based systems, can have substantial benefits for economic efficiency.
For instance, one commonly cited rule of thumb regarding economic efficiency costs of tax increases is that they rise proportionately faster than the tax rates. In other words, a 10 percent tax increase could raise the economic efficiency costs of a tax system by much more than 10 percent.

Aside from the base-broadening that minimizes targeted tax preferences favoring specific types of investment or other business behavior, it is also desirable on the grounds of economic efficiency to extend the principle of tax neutrality to the broader structural features of a business tax system. For example, improvements in economic efficiency can also be gained by avoiding differences in tax treatment, such as the differences in the current system based on legal form of organization, source of financing, and the nature and location of foreign operations. Removing such differences can shift resources to more productive uses, increasing economic performance and the standard of living of Americans. Shifting resources to more productive uses can result in a step up in the level of economic activity, which would be measured as a one-time increase in the rate of growth. Tax changes that increase efficiency can also increase the long-term rate of economic growth if they increase the rate of technological change; however, not all efficiency-increasing tax changes will do so.

The impact on the standard of living of Americans is also a useful criterion for evaluating policies to improve U.S. competitiveness. As was discussed earlier, narrower goals and policies, such as increasing the U.S. balance of trade through targeted tax breaks aimed at encouraging exports, are generally viewed as ineffective by economists. What determines the standard of living of Americans and how it compares to the standard of living in other countries is the productivity of American workers and capital. That productivity is determined by factors such as education, technological innovation, and the amount of investment in the U.S. economy. Tax policy can contribute to American productivity in several ways. One, discussed in this statement, is through neutral taxation of investment alternatives. Another, which I have discussed on many occasions, is through fiscal policy. Borrowing to finance persistent federal deficits absorbs savings from the private sector, reducing funds available for investment. Higher saving and investment from a more balanced fiscal policy would contribute to increased productivity and a higher standard of living for Americans over the long term.

A reformed business tax system should have attributes associated with high compliance rates. Because any tax system can be subject to tax gaps, the administrability of reformed systems should be considered as part of the debate for change. In general, a reformed system is most likely to have a small tax gap if the system has few tax preferences or complex provisions and taxable transactions are transparent. Transparency in the context of tax administration is best achieved when third parties report information both to the taxpayer and the tax administrator.

Minimizing tax code complexity has the potential to reduce noncompliance for at least three broad reasons. First, it could help taxpayers to comply voluntarily with more certainty, reducing inadvertent errors by those who want to comply but are confused because of complexity.
Second, it may limit opportunities for tax evasion, reducing intentional noncompliance by taxpayers who can misuse the complex code provisions to hide their noncompliance or to achieve ends through tax shelters. Third, reducing tax-code complexity could improve taxpayers' willingness to comply voluntarily.

Finally, the consideration of transition rules needs to be an integral part of the design of a new system. The effects of these rules can be too significant to leave them simply as an afterthought in the reform process.

The problems that I have reviewed today relating to the compliance costs, efficiency costs, equity, and tax gap associated with the current business tax system would seem to make a strong case for a comprehensive review and reform of our tax policy. Further, businesses operate in a world that is profoundly different—more competitive and more global—than when many of the existing provisions of the tax code were adopted. Despite numerous and repeated calls for reform, progress has been slow. I discussed reasons for the slow progress in a previous hearing on individual tax reform before this committee. One reason why reform is difficult to accomplish is that the provisions of the tax code that generate compliance costs, efficiency costs, the tax gap, and inequities also benefit many taxpayers. Reform is also difficult because, even when there is agreement on the amount of revenue to raise, there are differing opinions on the appropriate balance among the often conflicting objectives of equity, efficiency, and administrability. This, in turn, leads to widely divergent views on even the basic direction of reform.

However, I have described some basic principles that ought to guide business tax reform. One of them is revenue sufficiency. Fiscal necessity, prompted by the nation's unsustainable fiscal path, will eventually force changes to our spending and tax policies. We must fundamentally rethink policies, and everything must be on the table. Tough choices will have to be made about the appropriate degree of emphasis on cutting back federal programs versus increasing tax revenue. Other principles, such as broadening the tax base and otherwise promoting tax neutrality, could help improve economic performance. While economic growth alone will not solve our long-term fiscal problems, an improvement in our overall economic performance makes dealing with those problems easier.

The recent report of the President's Advisory Panel on Federal Tax Reform recommended two different tax reform plans. Although each plan is intended to improve economic efficiency and simplify the tax system, neither of them addresses the growing imbalance between federal spending and revenues that I have highlighted. One approach for getting the process of comprehensive fiscal reform started would be through the establishment of a credible, capable, and bipartisan commission to examine options for a combination of selected entitlement and tax reform issues.

Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions you may have at this time.

For further information on this testimony, please contact James White at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Jim Wozny, Assistant Director; Donald Marples; Jeff Arkin; and Cheryl Peterson.
Individual Income Tax Policy: Streamlining, Simplification, and Additional Reforms Are Desirable. GAO-06-1028T. Washington, D.C.: August 3, 2006.
Tax Compliance: Opportunities Exist to Reduce the Tax Gap Using a Variety of Approaches. GAO-06-1000T. Washington, D.C.: July 26, 2006.
Tax Compliance: Challenges to Corporate Tax Enforcement and Options to Improve Securities Basis Reporting. GAO-06-851T. Washington, D.C.: June 13, 2006.
Understanding the Tax Reform Debate: Background, Criteria, & Questions. GAO-05-1009SP. Washington, D.C.: September 2005.
Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined. GAO-05-690. Washington, D.C.: September 23, 2005.
Tax Policy: Summary of Estimates of the Costs of the Federal Tax System. GAO-05-878. Washington, D.C.: August 26, 2005.
Tax Compliance: Reducing the Tax Gap Can Contribute to Fiscal Sustainability but Will Require a Variety of Strategies. GAO-05-527T. Washington, D.C.: April 14, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 1, 2005.
Tax Administration: Potential Impact of Alternative Taxes on Taxpayers and Administrators. GAO/GGD-98-37. Washington, D.C.: January 14, 1998.
Corporate Income Tax Rates: International Comparisons. Washington, D.C.: November 2005.
Taxing Capital Income: Effective Rates and Approaches to Reform. Washington, D.C.: October 2005.
Effects of Adopting a Value-Added Tax. Washington, D.C.: February 1992.
Brumbaugh, David L. Taxes and International Competitiveness. RS22445. Washington, D.C.: May 19, 2006.
Brumbaugh, David L. Federal Business Taxation: The Current System, Its Effects, and Options for Reform. RL33171. Washington, D.C.: December 20, 2005.
Gravelle, Jane G. Capital Income Tax Revisions and Effective Tax Rates. RL32099. Washington, D.C.: January 5, 2005.
The Impact of International Tax Reform: Background and Selected Issues Relating to U.S. International Tax Rules and the Competitiveness of U.S. Businesses. JCX-22-06. Washington, D.C.: June 21, 2006.
Options to Improve Tax Compliance and Reform Tax Expenditures. JCS-02-05. Washington, D.C.: January 27, 2005.
The U.S. International Tax Rules: Background, Data, and Selected Issues Relating to the Competitiveness of U.S.-Based Business Operations. JCX-67-03. Washington, D.C.: July 3, 2003.
Background Materials on Business Tax Issues Prepared for the House Committee on Ways and Means Tax Policy Discussion Series. JCX-23-02. Washington, D.C.: April 4, 2002.
Report to The Congress on Depreciation Recovery Periods and Methods. Washington, D.C.: July 2000.
Integration of The Individual and Corporate Tax Systems: Taxing Business Income Once. Washington, D.C.: January 1992.
Simple, Fair, and Pro-Growth: Proposals to Fix America's Tax System. Washington, D.C.: November 2005.

Over the past decade, several proposals for fundamental tax reform have been put forward. These proposals would significantly change tax rates, the tax base, and the level of tax (whether taxes are collected from individuals, businesses, or both). Some of the proposals would replace the federal income tax with some type of consumption tax levied only on businesses. Consumption taxes levied only on businesses include retail sales taxes (RST) and value-added taxes (VAT). The flat tax would also change the tax base to consumption but would include both a relatively simple individual tax and a business tax.
A personal consumption tax, a consumption tax levied primarily on individuals, has also been proposed. Similar changes in the level at which taxes are collected could be made while retaining an income tax base. This appendix provides a brief description of several of these proposals.

The consumption tax that Americans are most familiar with is the retail sales tax, which, in many states, is levied when goods or services are purchased at the retail level. The RST is a consumption tax because only goods purchased by consumers are taxed; sales to businesses, including sales of investment goods, are generally exempt from tax. In contrast to an income tax, then, income that is saved is not taxed until it is used for consumption. Under a national RST, different tax rates could be applied to different goods, and the sale of some goods could carry a zero tax rate (exemption). However, directly taxing different individuals at different rates for the same good would be very difficult.

A consumption VAT, like the RST, is a business-level consumption tax levied directly on the purchase of goods and services. The two taxes differ in the manner in which the tax is collected and paid. In contrast to a retail sales tax, sales of goods and services both to consumers and to businesses are taxable under a VAT. However, businesses can either deduct the amount of their purchases of goods and services from other businesses (under a subtraction VAT) or claim a credit for tax paid on purchases from other businesses (under a credit VAT). Under either method, sales between businesses do not generate net tax liability under a VAT because the amount included in the tax base by the business selling goods is equal to the amount deducted by the business purchasing those goods. The only sales that generate net revenue for the government are sales from businesses to consumers, which is the same result as under the RST.

An income VAT would move the taxation of wage income to the business level as well. No individual returns would be necessary, so the burden of complying with the tax law would be eliminated for individuals. An income VAT would not allow businesses to deduct dividends, interest, or wages, so the income VAT remitted by businesses would include tax on these types of income. Calculations would not have to be made for different individuals, which would simplify tax administration and reduce compliance burdens but would not allow different individuals to be treated differently.

The flat tax was developed in the early 1980s by economists Robert Hall and Alvin Rabushka. The Hall-Rabushka flat tax proposal includes both an individual tax and a business tax. As described by Hall and Rabushka, the flat tax is a modification of a VAT; the modifications make the tax more progressive (less regressive) than a VAT. In particular, the business tax base is designed to be the same as that of a VAT, except that businesses are allowed to deduct wages and retirement income paid out, as well as purchases from other businesses. Wage and retirement income is then taxed when received by individuals at the same rate as the business tax rate. Because the flat tax includes this individual-level tax as well as the business tax, standard deductions can be made available to individuals. Individuals with less wage and retirement income than the standard deduction amounts would not owe any tax.
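To make the credit VAT mechanics described above more concrete, the short sketch below traces a hypothetical three-stage supply chain at an assumed 10 percent rate. The firms, prices, and rate are invented for illustration and are not drawn from any specific proposal.

```python
# Hypothetical credit-invoice VAT at an assumed 10 percent rate. Each seller
# remits tax on its sales minus a credit for tax already paid on its inputs, so
# business-to-business sales generate no net revenue; only the final sale to the
# consumer does -- the same base a retail sales tax would reach.

VAT_RATE = 0.10

# (seller, buyer, sale price before tax)
transactions = [
    ("manufacturer", "wholesaler", 400),
    ("wholesaler", "retailer", 700),
    ("retailer", "consumer", 1000),
]

input_tax_paid = {}   # VAT each buyer has paid on its purchases so far
total_revenue = 0.0
for seller, buyer, price in transactions:
    tax_on_sale = VAT_RATE * price
    credit = input_tax_paid.get(seller, 0.0)          # credit for VAT paid on inputs
    remitted = tax_on_sale - credit                   # net tax the seller sends in
    input_tax_paid[buyer] = input_tax_paid.get(buyer, 0.0) + tax_on_sale
    total_revenue += remitted
    print(f"{seller} remits {remitted:.2f}")

# The government collects 100.00 in total -- 10 percent of the 1,000 retail sale,
# exactly what a 10 percent retail sales tax on the final sale would raise.
print(f"total VAT collected: {total_revenue:.2f}")
```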
A personal consumption tax would look much like a personal income tax. The major difference between the two is that, under the consumption tax, taxpayers would include all income received, amounts borrowed, and cash flows received from the sale of assets, and then deduct the amount they saved. The remaining amount would be a measure of the taxpayer’s consumption over the year. When funds are withdrawn from bank accounts, or stocks or bonds are sold, both the original amount saved and any earnings on it are taxable because they are available for consumption. If withdrawn funds are reinvested in another qualified account or in stocks or bonds, the taxable amount of the withdrawal would be offset by the deduction for the same amount that is reinvested. While the personal consumption tax would look like a personal income tax, the tax base would be the same as that of an RST. Instead of collecting tax on each sale of consumer products at the business level, a personal consumption tax would tax individuals annually on the sum of all their purchases of consumption goods. Because it is an individual-level tax, different tax rates could be applied to different individuals so that the tax could be made more progressive, and other taxpayer characteristics, such as family size, could be taken into account if desired.
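A minimal numeric sketch of the personal consumption tax base described above, using entirely hypothetical amounts for a single taxpayer’s year:

```python
# Hypothetical figures for one taxpayer-year under a personal consumption tax.
# The base is cash made available for consumption minus net additions to saving.

wages            = 60_000   # all income received during the year
amounts_borrowed = 5_000    # borrowing counts because it can finance consumption
asset_sale_cash  = 10_000   # full sale proceeds (original saving plus earnings) count
amount_saved     = 20_000   # deposits to qualified accounts, reinvested proceeds, etc.

consumption_base = wages + amounts_borrowed + asset_sale_cash - amount_saved
print(consumption_base)   # 55000 -- the measure of consumption taxed for the year
```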
Business income taxes, both corporate and noncorporate, are a significant portion of federal tax revenue. Businesses also play a crucial role in collecting taxes from individuals, through withholding and information reporting. However, the design of the current system of business taxation is widely seen as flawed. It distorts investment decisions, hurting the performance of the economy. Its complexity imposes planning and record keeping costs, facilitates tax shelters, and provides potential cover for those who want to cheat. Not surprisingly, business tax reform is part of the debate about overall tax reform. The debate is occurring at a time when long-range projections show that, without a policy change, the gap between spending and revenues will widen. This testimony reviews the nation's long-term fiscal imbalance and what is wrong with the current system of business taxation, and it provides some principles that ought to guide the debate about business tax reform. This statement is based on previously published GAO work and reviews of relevant literature.

The size of business tax revenues makes them very relevant to any plan for addressing the nation's long-term fiscal imbalance. Reexamining both federal spending and revenues, including business tax policy and compliance, must be part of a multipronged approach to address the imbalance. Some features of current business taxes channel investments into tax-favored activities and away from more productive activities and, thereby, reduce the economic well-being of all Americans. Complexity in business tax laws imposes costs of its own, facilitates tax shelters, and provides potential cover for those who want to cheat. IRS's latest estimates show a business tax gap of at least $141 billion for 2001. This gap, in turn, undermines confidence in the fairness of our tax system--citizens' confidence that their friends, neighbors, and business competitors pay their fair share of taxes.

Principles that should guide the business tax reform debate include the following: (1) The proposed system should raise sufficient revenue over time to fund our current and future expected expenditures. (2) The tax base should be as broad as possible, which helps to minimize overall tax rates. (3) The proposed system should improve compliance rates by reducing tax preferences and complexity and increasing transparency. (4) To the extent other goals, such as equity and simplicity, allow, the tax system should aim for neutrality by not favoring some business activities over others. More neutral tax policy has the potential to enhance economic growth, increase productivity, and improve the competitiveness of the U.S. economy in terms of standard of living. (5) The consideration of transition rules must be an integral part of any reform proposal.
There are some similarities in how Medicare pays ASCs and hospital outpatient departments for the procedures they perform. However, the methods used by CMS to calculate the payment rates in each system, as well as the mechanisms used to revise the Medicare payment rates, differ.

In 1980, legislation was enacted that enabled ASCs to bill Medicare for certain surgical procedures provided to Medicare beneficiaries. Under the ASC payment system, Medicare pays a predetermined, and generally all-inclusive, amount per procedure to the facility. The approximately 2,500 surgical procedures that ASCs may bill for under Medicare are assigned to one of nine payment groups that contain procedures with similar costs, but not necessarily clinical similarities. All procedures assigned to one payment group are paid at the same rate. Under the Medicare payment system, when more than one procedure is performed at the same time, the ASC receives a payment for each of the procedures. However, the procedure that has the highest payment rate receives 100 percent of the applicable payment, and each additional procedure receives 50 percent of the applicable payment. The Medicare payment for a procedure performed at an ASC is intended to cover the direct costs for a procedure, such as nursing and technician services, drugs, medical and surgical supplies and equipment, anesthesia materials, and diagnostic services (including imaging services), and the indirect costs associated with the procedure, including use of the facility and related administrative services. The ASC payment for a procedure does not include payment for implantable devices or prosthetics related to the procedure; ASCs may bill separately for those items. In addition, the payment to the ASC does not include payment for professional services associated with the procedure; the physician who performs the procedure and the anesthesiologist or anesthetist bill Medicare directly for their services. Finally, the ASC payment does not include payment for certain other services that are not directly related to performing the procedure and do not occur during the time that the procedure takes place, such as some laboratory, X-ray, and other diagnostic tests. Because these additional services are not ASC procedures, they may be performed by another provider. In those cases, Medicare makes payments to those providers for the additional services. For example, a laboratory service needed to evaluate a tissue sample removed during an ASC procedure is not included in the ASC payment. The provider that evaluated the tissue sample would bill and receive payment from Medicare for that service. Because ASCs receive one inclusive payment for the procedure performed and its associated services, such as drugs, they generally include on their Medicare claim only the procedure performed.

In 1997, legislation was enacted that required the implementation of a prospective payment system for hospital outpatient departments; the OPPS was implemented in August 2000. Although ASCs perform only procedures, hospital outpatient departments provide a much broader array of services, including diagnostic services, such as X-rays and laboratory tests, and emergency room and clinic visits. Each of the approximately 5,500 services, including procedures, that hospital outpatient departments perform is assigned to one of over 800 APC groups, along with other services that have clinical and cost similarities, for payment under the OPPS. All services assigned to one APC group are paid the same rate.
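As a simple illustration of the ASC multiple-procedure rule described above, the sketch below applies the 100 percent/50 percent policy to invented payment rates; the dollar figures are hypothetical and are not actual payment group amounts.

```python
# Medicare's multiple-procedure rule for ASCs: the highest-rate procedure is paid
# at 100 percent of its applicable rate and each additional procedure at 50 percent.
# The rates below are invented for illustration.

def multiple_procedure_payment(applicable_rates):
    rates = sorted(applicable_rates, reverse=True)
    return rates[0] + 0.5 * sum(rates[1:])

print(multiple_procedure_payment([446.0]))         # one procedure: 446.0
print(multiple_procedure_payment([446.0, 333.0]))  # 446.0 + 0.5 * 333.0 = 612.5
```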
Similar to ASCs, when hospitals perform multiple procedures at the same time, they receive 100 percent of the applicable payment for the procedure that has the highest payment rate, and 50 percent of the applicable payment for each additional procedure, subject to certain exceptions. Like payments to ASCs, payment for a procedure under the OPPS is intended to cover the costs of the use of the facility, nursing and technician services, most drugs, medical and surgical supplies and equipment, anesthesia materials, and administrative costs. Medicare payment to a hospital for a procedure does not include payment for the professional services of physicians or nonphysician practitioners. These services are paid for separately by Medicare.

However, there are some differences between ASC and OPPS payments for procedures. Under the OPPS, hospital outpatient departments generally may not bill separately for implantable devices related to the procedure, but they may bill separately for additional services that are directly related to the procedure, such as certain drugs and diagnostic services, including X-rays. Hospital outpatient departments also may bill separately for additional services that are not directly related to the procedure and do not occur during the procedure, such as laboratory services to evaluate a tissue sample. Because they provide a broader array of services, and because CMS has encouraged hospitals to report all services provided during a procedure on their Medicare claims for rate-setting purposes, hospital claims may provide more detail about the services delivered during a procedure than ASC claims do.

CMS set the initial 1982 ASC payment rates based on cost and charge data from 40 ASCs. At that time, there were about 125 ASCs in operation. Procedures were placed into four payment groups, and all procedures in a group were paid the same rate. When the ASC payment system was first established, federal law required CMS to review the payment rates periodically. In 1986, CMS conducted an ASC survey to gather cost and charge data. In 1990, using these data, CMS revised the payment rates and increased the number of payment groups to eight. A ninth payment group was established in 1991. These groups are still in use, although some procedures have been added to or deleted from the ASC-approved list. Although payments have not been revised using ASC cost data since 1990, the payment rates have been periodically updated for inflation. In 1994, Congress required that CMS conduct a survey of ASC costs no later than January 1, 1995, and thereafter every 5 years, to revise ASC payment rates. CMS conducted a survey in 1994 to collect ASC cost data. In 1998, CMS proposed revising ASC payment rates based on the 1994 survey data and assigning procedures performed at ASCs to payment groups that were comparable to the payment groups it was developing for the same procedures under the OPPS. However, CMS did not implement the proposal, and, as a result, the ASC payment system was not revised using the 1994 data. In 2003, MMA eliminated the requirement to conduct ASC surveys every 5 years and required CMS to implement a revised ASC payment system no later than January 1, 2008. During the course of our work, in August 2006, CMS published a proposed rule that would revise the ASC payment system effective January 1, 2008. In this proposed rule, CMS based the revised ASC payment rates on the OPPS APC groups; however, the payment rates for ASCs would be lower.
The initial OPPS payment rates, implemented in August 2000, were based on hospitals’ 1996 costs. To determine the OPPS payment rates, CMS first calculates each hospital’s cost for each service by multiplying the charge for that service by a cost-to-charge ratio computed from the hospital’s most recently reported data. After calculating the cost of each service for each hospital, CMS groups the services by their APC assignment and calculates a median cost for each APC group from the median costs of all services assigned to it. Using the median cost, CMS assigns each APC group a weight based on its median cost relative to the median cost of all other APCs. To obtain a payment rate for each APC group, CMS multiplies the relative weight by a factor that converts it to a dollar amount. Beginning in 2002, as required by law, the APC group payment rates have been revised annually based on the latest charge and cost data. In addition, the payment rates for services paid under the OPPS receive an annual inflation update.

We found many similarities in the additional services provided by ASCs and hospital outpatient departments with the top 20 procedures. Of the additional services billed with a procedure, few resulted in an additional payment in one setting but not the other. Hospitals were paid for some of the related additional services they billed with the procedures. In the ASC setting, other providers billed Medicare for these services and received payment for them.

In our analysis of Medicare claims, we found many similarities in the additional services billed in the ASC or hospital outpatient department setting with the top 20 procedures. The similar additional services are illustrated in the following four categories of services: additional procedures, laboratory services, radiology services, and anesthesia services.

First, one or more additional procedures were billed with a procedure performed in either the ASC or hospital outpatient department setting for 14 of the top 20 procedures. The proportion of time each additional procedure was billed in each setting was similar. For example, when a hammertoe repair procedure was performed, our analysis indicated that another procedure to correct a bunion was billed 11 percent of the time in the ASC setting, and in the hospital outpatient setting, the procedure to correct a bunion was billed 13 percent of the time. Similarly, when a diagnostic colonoscopy was performed, an upper gastrointestinal (GI) endoscopy was billed 11 percent of the time in the ASC setting, and in the hospital setting, the upper GI endoscopy was billed 12 percent of the time. For 11 of these 14 procedures, the proportion of time each additional procedure was billed differed by less than 10 percentage points between the two settings. For the 3 remaining procedures, the percentage of time that an additional procedure was billed did not vary by more than 25 percentage points between the two settings. See appendix III for a complete list of the additional procedures billed and the proportion of time they were billed in each setting.

Second, laboratory services were billed with 10 of the top 20 procedures in the hospital outpatient department setting and 7 of the top 20 procedures in the ASC setting. While these services were almost always billed by the hospital in the outpatient setting, they were typically not billed by the ASCs.
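Stepping back to the OPPS rate-setting steps described at the beginning of this section, the sketch below outlines them in simplified form; the hospitals, charges, cost-to-charge ratios, and conversion factor are invented, and the service-level grouping CMS actually performs is collapsed for brevity.

```python
from statistics import median

# Simplified OPPS rate setting: convert each hospital's charge to a cost using its
# cost-to-charge ratio, take a median cost for each APC group, weight each group
# against the median cost across groups, and convert weights to dollars.
# All figures are hypothetical.

# (hospital, APC group, charge, cost-to-charge ratio)
claims = [
    ("hospital_a", "APC_1", 1_000, 0.45),
    ("hospital_b", "APC_1", 1_200, 0.40),
    ("hospital_a", "APC_2", 2_500, 0.45),
    ("hospital_b", "APC_2", 2_000, 0.40),
]

costs_by_apc = {}
for hospital, apc, charge, ccr in claims:
    costs_by_apc.setdefault(apc, []).append(charge * ccr)

apc_median_cost = {apc: median(costs) for apc, costs in costs_by_apc.items()}
median_across_groups = median(apc_median_cost.values())

CONVERSION_FACTOR = 75.0  # assumed dollars per unit of relative weight
for apc, cost in sorted(apc_median_cost.items()):
    relative_weight = cost / median_across_groups
    payment_rate = relative_weight * CONVERSION_FACTOR
    print(f"{apc}: weight {relative_weight:.2f}, rate {payment_rate:.2f}")
```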
The laboratory services billed with the top 20 procedures in the ASC setting were present in our analysis because they were performed and billed by another Medicare part B provider.

Third, four different radiology services were billed with 8 of the top 20 procedures. Radiology services were billed with 5 procedures in the ASC setting and with 8 procedures in the hospital outpatient department setting. The radiology services generally were included on the hospital outpatient department bills but rarely were included on the ASC bills. Similar to laboratory services, hospital outpatient departments billed for radiology services that they performed in addition to the procedures. When radiology services were billed with procedures in the ASC setting, these services generally were performed and billed by another part B provider.

Fourth, anesthesia services were billed with 17 of the top 20 procedures in either the ASC or hospital outpatient setting and with 14 procedures in both settings. In virtually every case in the ASC setting, and in most cases in the hospital outpatient department setting, these services were billed by another part B provider.

According to our analysis, ASCs generally did not include on their bills any services other than the procedures they performed. However, in the hospital outpatient setting, some additional services were included on the hospitals’ bills. We believe this is a result of the structure of the two payment systems. Because ASCs generally receive payment from Medicare only for procedures, they typically include only those procedures on their bills. In contrast, hospital outpatient departments’ bills often include many of the individual items or services they provide as a part of a procedure because CMS has encouraged them to do so, whether the items or services are included in the OPPS payment or paid separately.

With the exception of additional procedures, there were few separate payments that could be made for additional services provided with the top 20 procedures because most of the services in our analysis were included in the Medicare payment to the ASC or hospital. Under both the Medicare ASC and OPPS payment systems, when more than one procedure is performed at the same time, the facility receives 100 percent of the applicable payment for the procedure that has the highest payment rate and 50 percent of the applicable payment for each additional procedure. Because this policy applies to both settings, for those instances in our analysis when an additional procedure was performed with one of the top 20 procedures in either setting, the ASC or hospital outpatient department received 100 percent of the payment for the procedure with the highest payment rate and 50 percent of the payment for each lower-paid procedure.

Individual drugs were billed by hospital outpatient departments for most of the top 20 procedures, although they were not present on the claims from ASCs, likely because ASCs generally cannot receive separate Medicare payments for individual drugs. However, none of the individual drugs billed by the hospital outpatient departments in our analysis resulted in an additional payment to the hospitals. In each case, the cost of the particular drug was included in the Medicare payment for the procedure. In the case of the laboratory services billed with procedures in the ASC and hospital outpatient department settings, those services were not included in the payment for the procedure in either setting and were paid separately in each case.
For both settings, the payment was made to the provider that performed the service. In the case of the hospital outpatient department setting, the payment was generally made to the hospital, while, for procedures performed at ASCs, payment was made to another provider who performed the service. Of the four radiology services in our analysis, three were similar to the laboratory services in that they are not included in the cost of the procedure and are separately paid services under Medicare. Therefore, when hospitals provided these services, they received payment for them. In the ASC setting, these services were typically billed by a provider other than the ASC, and the provider received payment for them. The fourth radiology service is included in the payment for the procedure with which it was associated. Therefore, no separate payment was made to either ASCs or hospital outpatient departments. With regard to anesthesia services, most services were billed by and paid to a provider other than an ASC or hospital. As a group, the costs of procedures performed in ASCs have a relatively consistent relationship with the costs of the APC groups to which they would be assigned under the OPPS. That is, the APC groups accurately reflect the relative costs of procedures performed in ASCs. We found that the ASC-to-APC cost ratios were more tightly distributed around their median cost ratio than the OPPS-to-APC cost ratios were around their median cost ratio. Specifically, 45 percent of all procedures in our analysis fell within 0.10 points of the ASC-to-APC median cost ratio, and 33 percent of procedures fell within 0.10 points of the OPPS-to-APC median cost ratio. However, the costs of procedures in ASCs are substantially lower than costs for the same procedures in the hospital outpatient setting. The APC groups reflect the relative costs of procedures provided by ASCs as well as they reflect the relative costs of procedures provided in the hospital outpatient department setting. In our analysis, we listed the procedures performed at ASCs and calculated the ratio of the cost of each procedure to the cost of the APC group to which it would have been assigned, referred to as the ASC-to-APC cost ratio. We then calculated similar cost ratios for the same procedures exclusively within the OPPS. To determine an OPPS-to-APC cost ratio, we divided individual procedures’ median costs, as calculated by CMS for the OPPS, by the median cost of their APC group. Our analysis of the cost ratios showed that the ASC-to-APC cost ratios were more tightly distributed around their median than were the OPPS-to-APC cost ratios; that is, there were more of them closer to the median. Specifically, 45 percent of procedures performed in ASCs fell within a 0.10 point range of the ASC-to-APC median cost ratio, and 33 percent of those procedures fell within a 0.10 point range of the OPPS-to-APC median cost ratio in the hospital outpatient department setting (see figs. 1 and 2). Therefore, there is less variation in the ASC setting between individual procedures’ costs and the costs of their assigned APC groups than there is in the hospital outpatient department setting. From this outcome, we determined that the OPPS APC groups could be used to pay for procedures in ASCs. The median costs of procedures performed in ASCs were generally lower than the median costs of their corresponding APC group under the OPPS. Among all procedures in our analysis, the median ASC-to-APC cost ratio was 0.39. 
The ASC-to-APC cost ratios ranged from 0.02 to 3.34. When weighted by Medicare volume based on 2004 claims data, the median ASC-to-APC cost ratio was 0.84. We determined that the median OPPS-to-APC cost ratio was 1.04. This analysis shows that, when compared to the median cost of the same APC group, procedures performed in ASCs had substantially lower costs than when those same procedures were performed in hospital outpatient departments.

Generally, there were many similarities between the additional services provided in ASCs and hospital outpatient departments with one of the top 20 procedures, and few of those services resulted in an additional Medicare payment to ASCs or hospital outpatient departments. Although costs for individual procedures vary, in general, the median costs for procedures are lower in ASCs, relative to the median costs of their APC groups, than the median costs for the same procedures in the hospital outpatient department setting. The APC groups in the OPPS reflect the relative costs of procedures performed in ASCs in the same way that they reflect the relative costs of the same procedures when they are performed in hospital outpatient departments. Therefore, the APC groups could be applied to procedures performed in ASCs, and the OPPS could be used as the basis for an ASC payment system, eliminating the need for ASC surveys and providing for an annual revision of the ASC payment groups.

We recommend that the Administrator of CMS implement a payment system for procedures performed in ASCs based on the OPPS. The Administrator should take into account the lower relative costs of procedures performed in ASCs compared to hospital outpatient departments in determining ASC payment rates.

We received written comments on a draft of this report from CMS (see app. IV). We also received oral comments from external reviewers representing two ASC industry organizations, AAASC and FASA. In commenting on a draft of this report, CMS stated that our recommendation is consistent with its August 2006 proposed revisions to the ASC payment system. Industry representatives who reviewed a draft of this report did not agree or disagree with our recommendation for executive action. They did, however, provide several comments on the draft report. The industry representatives noted that we did not analyze the survey results to examine differences in per-procedure costs between single-specialty and multi-specialty ASCs. Regarding this comment, we initially considered developing our survey sample stratified by ASC specialty type. However, because accurate data identifying ASCs’ specialties do not exist, we were unable to stratify our survey sample by specialty type. The industry representatives asked us to provide more explanation in our scope and methodology regarding our development of a relative weight scale for Medicare ASC-approved procedures to capture the general variation in resources associated with performing different procedures. We expanded the discussion of how we developed the relative weight scale in our methodology section. Reviewers also made technical comments, which we incorporated where appropriate.

We are sending a copy of this report to the Administrator of CMS and appropriate congressional committees. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others on request. If you or your staff members have any questions about this report, please contact me at (202) 512-7119 or kingk@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made significant contributions to this report are listed in appendix V.

The Medicare payment rates for ambulatory surgical centers (ASC), along with those of other facilities, are adjusted to account for the variation in labor costs across the country. To calculate payment rates for individual ASCs, the Centers for Medicare & Medicaid Services (CMS) calculates the share of total costs that are labor-related and then adjusts ASCs’ labor-related share of costs based on a wage index calculated for specific geographic areas across the country. The wage index reflects how the average wage for health care personnel in each geographic area compares to the national average health care personnel wage. The geographic areas are intended to represent the separate labor markets in which health care facilities compete for employees.

In setting the initial ASC payment rates for 1982, CMS determined from the first survey of ASCs that one-third of their costs were labor-related. The labor-related costs included employee salaries and fringe benefits, contractual personnel, and owners’ compensation for duties performed for the facility. To determine the payment rates for each individual ASC, CMS multiplied one-third of the payment rate for each procedure—the labor-related portion—by the local area wage index. Each ASC received the base amount for the remaining two-thirds of the payment rate—the nonlabor-related portion—for each procedure. The sum of the labor-related and nonlabor-related portions equaled each ASC’s payment rate for each procedure. In 1990, when CMS revised the payment system based on a 1986 ASC survey, CMS found ASCs’ average labor-related share of costs to be 34.45 percent and used this percentage as the labor-related portion of the payment rate. In a 1998 proposed rule, CMS noted that ASCs’ share of labor-related costs as calculated from the 1994 ASC cost survey had increased to an average of 37.66 percent, slightly higher than the percentage calculated from the 1986 survey. However, CMS did not implement the 1998 proposal. Currently, the labor-related proportion of costs from CMS’s 1986 survey, 34.45 percent, is used for calculating ASC payment rates. Using 2004 cost data we received from 290 ASCs that responded to our survey request for information, we determined that the mean labor-related proportion of costs was 50 percent, and that the range of the labor-related costs for the middle 50 percent of our ASC facilities was 43 percent to 57 percent of total costs.

To compare the delivery of procedures between ASCs and hospital outpatient departments, we analyzed Medicare claims data from 2003. To compare the relative costs of procedures performed in ASCs and hospital outpatient departments, we collected cost and procedure data for 2004 from a sample of Medicare-participating ASCs. We also interviewed officials at CMS and representatives from ASC industry organizations, specifically, the American Association of Ambulatory Surgery Centers (AAASC) and FASA, physician specialty societies, and nine ASCs. To compare the delivery of additional services provided with procedures performed in ASCs and hospital outpatient departments, we identified all additional services frequently billed in each setting when one of the top 20 procedures with the highest Medicare ASC claims volume was performed; a rough sketch of this tallying step follows below.
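The sketch below illustrates that tallying step under assumed data structures; the records and field layout are hypothetical, while the actual analysis used 2003 National Claims History files and included claims from the day of the procedure and the day after.

```python
from collections import Counter

# Hypothetical claim records: (beneficiary, setting, billed service).
claims = [
    ("bene1", "ASC", "colonoscopy"), ("bene1", "ASC", "upper_gi_endoscopy"),
    ("bene2", "ASC", "colonoscopy"),
    ("bene3", "HOPD", "colonoscopy"), ("bene3", "HOPD", "anesthesia"),
    ("bene4", "HOPD", "colonoscopy"), ("bene4", "HOPD", "upper_gi_endoscopy"),
]

def frequent_additional_services(claims, procedure, setting, threshold=0.10):
    """Additional services billed with `procedure` in `setting` at least
    `threshold` of the time, as a share of beneficiaries who had the procedure."""
    benes = {b for b, s, svc in claims if s == setting and svc == procedure}
    counts = Counter(svc for b, s, svc in claims
                     if s == setting and b in benes and svc != procedure)
    return {svc: n / len(benes) for svc, n in counts.items()
            if n / len(benes) >= threshold}

# Compare the two settings for one of the top 20 procedures.
print(frequent_additional_services(claims, "colonoscopy", "ASC"))
print(frequent_additional_services(claims, "colonoscopy", "HOPD"))
```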
These procedures represented approximately 75 percent of all Medicare ASC claims in 2003. Using Medicare claims data for 2003, we identified beneficiaries receiving one of the top 20 procedures in either an ASC or hospital outpatient department, then identified any other claims for those beneficiaries from ASCs, hospital outpatient departments, durable medical equipment suppliers, and other Medicare part B providers. We identified claims for the beneficiaries on the day the procedure was performed and the day after. We created a list that included all additional services that were billed at least 10 percent of the time with each of the top 20 procedures when they were performed in ASCs. We created a similar list of additional services for each of the top 20 procedures when they were performed in hospital outpatient departments. We then compared the lists for each of the top 20 procedures between the two settings to determine whether there were similarities in the additional services that were billed to Medicare. To compare the Medicare payments for procedures performed in ASCs and hospital outpatient departments, we identified whether any additional services included in our analysis resulted in an additional payment. We used Medicare claims data from the National Claims History (NCH) files. These data, which are used by the Medicare program to make payments to health care providers, are closely monitored by both CMS and the Medicare contractors that process, review, and pay claims for Medicare services. The data are subject to various internal controls, including checks and edits performed by the contractors before claims are submitted to CMS for payment approval. Although we did not review these internal controls, we did assess the reliability of the NCH data. First, we reviewed all existing information about the data, including the data dictionary and file layouts. We also interviewed experts at CMS who regularly use the data for evaluation and analysis. We found the data to be sufficiently reliable for the purposes of this report. To compare the relative costs of procedures performed in ASCs and hospital outpatient departments, we first compiled information on ASCs’ costs and procedures performed. Because there were no recent existing data on ASC costs, we surveyed 600 ASCs, randomly selected from all ASCs, to obtain their 2004 cost and procedure data. We received response data from 397 ASC facilities. We assessed the reliability of these data through several means. We identified incomplete and inconsistent survey responses within individual surveys and placed follow-up calls to respondents to complete or verify their responses. To ensure that survey response data were accurately transferred to electronic files for our analytic purposes, two analysts independently entered all survey responses. Any discrepancies between the two sets of entered responses were resolved. We performed electronic testing for errors in accuracy and completeness, including an analysis of costs per procedure. As a result of our data reliability testing, we determined that data from 290 responding facilities were sufficiently reliable for our purposes. Our nonresponse analysis showed that there was no geographic bias among the facilities responding to our survey. The responding facilities performed more Medicare services than the average for all ASCs in our sample. 
To allocate ASCs’ total costs among the individual procedures they perform, we developed a method to estimate the portion of each ASC’s costs accounted for by each procedure. We constructed a relative weight scale for Medicare ASC-approved procedures that captures the general variation in resources associated with performing different procedures. The resources we used were the clinical staff time, surgical supplies, and surgical equipment used during the procedures. We used cost and quantity data on these resources from information CMS had collected for the purpose of setting the practice expense component of physician payment rates. For procedures for which CMS had no data on the resources used, we used information we collected from medical specialty societies and physicians who work for CMS. We summed the costs of the resources for each procedure and created a relative weight scale by dividing the total cost of each procedure by the average cost across all of the procedures. We assessed the reliability of these data through several means. We compared electronic CMS data with the original document sources for a large sample of records, performed electronic testing for errors in accuracy and completeness, and reviewed data for reasonableness. Based on these efforts, we determined that the data were sufficiently reliable for our purposes.

To calculate per-procedure costs with the data from the surveyed ASC facilities, we first deducted costs that Medicare considers unallowable, such as advertising and entertainment costs. (See fig. 3 for our per-procedure cost calculation methodology.) We also deducted costs for services that Medicare pays for separately, such as physician and nonphysician practitioner services. We then separated each facility’s total costs into its direct and indirect costs. We defined direct costs as those associated with the clinical staff, equipment, and supplies used during the procedure. Indirect costs included all remaining costs, such as support and administrative staff, building expenses, and outside services purchased. To allocate each facility’s direct costs across the procedures it performed, we applied our relative weight scale. We allocated indirect costs equally across all procedures performed by the facility. For each procedure performed by a responding ASC facility, we summed its allocated direct and indirect costs to determine a total cost for the procedure. To obtain a per-procedure cost across all ASCs, we arrayed the calculated costs for all ASCs performing that procedure and identified the median cost.

To compare per-procedure costs for ASCs and hospital outpatient departments, we first obtained from CMS the list of ambulatory payment classification (APC) groups used for the outpatient prospective payment system (OPPS) and the procedures assigned to each APC group. We also obtained from CMS the OPPS median cost of each procedure and the median cost of each APC group. We then calculated a ratio between each procedure’s ASC median cost, as determined by the survey, and the median cost of each procedure’s corresponding APC group under the OPPS, referred to as the ASC-to-APC cost ratio. We also calculated a ratio between each ASC procedure’s median cost under the OPPS and the median cost of the procedure’s APC group, using the data obtained from CMS, referred to as the OPPS-to-APC cost ratio. To evaluate the difference in procedure costs between the two settings, we compared the ASC-to-APC and OPPS-to-APC cost ratios; a compressed sketch of these steps follows below.
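A compressed sketch of the cost-allocation and cost-ratio steps just described; the weights, volumes, costs, and APC assignments are invented, and a single facility stands in for the median across all responding ASCs.

```python
from statistics import median

# Step 1: allocate one hypothetical ASC's allowable costs to the procedures it
# performed. Direct costs are spread using a resource-based relative weight scale;
# indirect costs are spread equally across all procedures performed.
relative_weight = {"proc_a": 0.8, "proc_b": 1.6}   # relative resource use per procedure
volume          = {"proc_a": 300, "proc_b": 100}   # procedures the facility performed
direct_costs, indirect_costs = 400_000, 200_000

weighted_units = sum(relative_weight[p] * volume[p] for p in volume)
per_procedure_cost = {
    p: relative_weight[p] * direct_costs / weighted_units    # allocated direct cost
       + indirect_costs / sum(volume.values())               # equal indirect share
    for p in volume
}  # proc_a: 1,300  proc_b: 2,100

# Step 2: with many responding ASCs, the per-procedure cost would be the median
# across facilities; here one facility's figures stand in for those medians.
asc_median_cost = per_procedure_cost

# Step 3: divide each procedure's ASC median cost by the OPPS median cost of its
# APC group (the ASC-to-APC ratio); the OPPS-to-APC ratio is computed the same way
# but with the procedure's own OPPS median cost in the numerator.
apc_group       = {"proc_a": "APC_1", "proc_b": "APC_2"}
apc_median_cost = {"APC_1": 3_200, "APC_2": 5_500}           # hypothetical OPPS medians

asc_to_apc = {p: round(asc_median_cost[p] / apc_median_cost[g], 2)
              for p, g in apc_group.items()}
print(asc_to_apc)                    # {'proc_a': 0.41, 'proc_b': 0.38}
print(median(asc_to_apc.values()))   # median ASC-to-APC cost ratio in this sketch
```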
To assess how well the relative costs of procedures in the OPPS, defined by their assignment to APC groups, reflect the relative costs of procedures in the ASC setting, we evaluated the distribution of the ASC-to-APC and OPPS-to-APC cost ratios. To calculate the percentage of labor-related costs among our sample ASCs, for each ASC, we divided total labor costs by total costs, after deducting costs not covered by Medicare’s facility payment. We then determined the range of the percentage of labor-related costs among all of our ASCs and between the 25th percentile and the 75th percentile, as well as the mean and median percentage of labor-related costs. We performed our work from April 2004 through October 2006 in accordance with generally accepted government auditing standards.

Appendix III: Additional Procedures Billed with the Top 20 ASC Procedures, 2003

In addition to the contact named above, key contributors to this report were Nancy A. Edwards, Assistant Director; Kevin Dietz; Beth Cameron Feldpush; Marc Feuerberg; and Nora Hoban.
Medicare pays for surgical procedures performed at ambulatory surgical centers (ASC) and hospital outpatient departments through different payment systems. Although they perform a similar set of procedures, no comparison of ASC and hospital outpatient per-procedure costs has been conducted. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed GAO to compare the relative costs of procedures furnished in ASCs to the relative costs of those procedures furnished in hospital outpatient departments, in particular, how accurately the payment groups used in the hospital outpatient prospective payment system (OPPS) reflect the relative costs of procedures performed in ASCs. To do this, GAO collected data from ASCs through a survey. GAO also obtained hospital outpatient data from the Centers for Medicare & Medicaid Services (CMS). GAO determined that the payment groups in the OPPS, known as ambulatory payment classification (APC) groups, accurately reflect the relative cost of procedures performed in ASCs. GAO calculated the ratio between each procedure's ASC median cost, as determined by GAO's survey, and the median cost of each procedure's corresponding APC group under the OPPS, referred to as the ASC-to-APC cost ratio. GAO also compared the OPPS median costs of those same procedures with the median costs of their APC groups, referred to as the OPPS-to-APC cost ratio. GAO's analysis of the ASC-to-APC and OPPS-to-APC cost ratios showed that 45 percent of all procedures in the analysis fell within a 0.10 point range of the ASC-to-APC median cost ratio, and 33 percent of procedures fell within a 0.10 point range of the OPPS-to-APC median cost ratio. These similar patterns of distribution around the median show that the APC groups reflect the relative costs of procedures provided by ASCs as well as they reflect the relative costs of procedures provided in hospital outpatient departments and can be used as the basis for the ASC payment system. GAO's analysis also identified differences in the cost of procedures in the two settings. The median cost ratio among all ASC procedures was 0.39 and when weighted by Medicare claims volume was 0.84. The median cost ratio for OPPS procedures was 1.04. Thus, the cost of procedures in ASCs is substantially lower than the corresponding cost in hospital outpatient departments.
IRS’s mission is to provide America’s taxpayers top-quality service by helping them to understand and meet their tax responsibilities and to enforce the law with integrity and fairness to all. During fiscal year 2015, IRS collected more than $3.3 trillion; processed more than 243 million tax returns and other forms; and issued more than $403 billion in tax refunds. IRS employs about 90,000 people in its Washington, D.C., headquarters and at more than 550 offices in all 50 states, U.S. territories, and some U.S. embassies and consulates. Each filing season IRS provides assistance to tens of millions of taxpayers over the phone, through written correspondence, online, and face-to-face. The scale of these operations alone presents challenges.

In carrying out its mission, IRS relies extensively on computerized information systems, which it must effectively secure to protect sensitive financial and taxpayer data for the collection of taxes, processing of tax returns, and enforcement of federal tax laws. Accordingly, it is critical for IRS to effectively implement information security controls and an agency-wide information security program in accordance with federal law and guidance.

Cyber incidents can adversely affect national security, damage public health and safety, and compromise sensitive information. Regarding IRS specifically, two recent incidents illustrate the impact on taxpayer and other sensitive information:

In June 2015, the Commissioner of the IRS testified that unauthorized third parties had gained access to taxpayer information from its Get Transcript application. According to officials, criminals used taxpayer-specific data acquired from non-department sources to gain unauthorized access to information on approximately 100,000 tax accounts. These data included Social Security information, dates of birth, and street addresses. In an August 2015 update, IRS reported that this number was about 114,000 and that an additional 220,000 accounts had been inappropriately accessed. In a February 2016 update, the agency reported that an additional 390,000 accounts had been accessed. Thus, about 724,000 accounts were reportedly affected. The online Get Transcript service has been unavailable since May 2015.

In March 2016, IRS stated that as part of its ongoing security review, it had temporarily suspended the Identity Protection Personal Identification Number (IP PIN) service on IRS.gov. The IP PIN is a single-use identification number provided to taxpayers who are victims of identity theft (IDT) to help prevent future IDT refund fraud. The service on IRS’s website allowed taxpayers to retrieve their IP PINs online by passing IRS’s authentication checks, which confirm a taxpayer’s identity by asking for personal, financial, and tax-related information. IRS stated that it was conducting further review of the IP PIN service and was looking at further strengthening its security features before resuming the service. As of April 7, 2016, the online service was still suspended.

The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and systems that support the agency and its operations. Within IRS, the senior agency official responsible for information security is the Associate CIO, who heads the IRS Information Technology Cybersecurity organization.
As we reported in March 2016, IRS has implemented numerous controls over key financial and tax processing systems; however, it had not always effectively implemented access and other controls, including elements of its information security program. Access controls are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. These controls include identification and authentication, authorization, cryptography, audit and monitoring, and physical security controls, among others. In our most recent review we found that IRS had improved access controls, but some weaknesses remain. Identifying and authenticating users—such as through user account-password combinations—provides the basis for establishing accountability and controlling access to a system. IRS established policies for identification and authentication, including requiring multifactor authentication for local and network access accounts and establishing password complexity and expiration requirements. It also improved identification and authentication controls by, for example, expanding the use of an automated mechanism to centrally manage, apply, and verify password requirements. However, weaknesses in identification and authentication controls remained. For example, the agency used easily guessable passwords on servers supporting key systems. Authorization controls limit what actions users are able to perform after being allowed into a system and should be based on the concept of “least privilege,” granting users the least amount of rights and privileges necessary to perform their duties. While IRS established policies for authorizing access to its systems, it continued to permit excessive access in some cases. For example, users were granted rights and permissions in excess of what they needed to perform their duties, including for an application used to process electronic tax payment information and a database on a human resources system. Cryptography controls protect sensitive data and computer programs by rendering data unintelligible to unauthorized users and protecting the integrity of transmitted or stored data. IRS policies require the use of encryption and it continued to expand its use of encryption to protect sensitive data. However, key systems we reviewed had not been configured to encrypt sensitive user authentication data. Audit and monitoring is the regular collection, review, and analysis of events on systems and networks in order to detect, respond to, and investigate unusual activity. IRS established policies and procedures for auditing and monitoring its systems and continued to enhance its capability by, for example, implementing an automated mechanism to log user activity on its access request and approval system. But it had not established logging for two key applications used to support the transfer of financial data and access and manage taxpayer accounts; nor was the agency consistently maintaining key system and application audit plans. Physical security controls, such as physical access cards, limit access to an organization’s overall facility and areas housing sensitive IT components. IRS established policies for physically protecting its computer resources and physical security controls at its enterprise computer centers, such as a dedicated guard force at each of its computer centers. However, the agency had yet to address weaknesses in its review of access lists for both employees and visitors to sensitive areas. 
IRS also had weaknesses in configuration management controls, which are intended to prevent unauthorized changes to information system resources (e.g., software and hardware) and provide assurance that systems are configured and operating securely. Specifically, while IRS developed policies for managing the configuration of its information technology (IT) systems and improved some configuration management controls, it did not, for example, ensure that security patch updates were applied in a timely manner to databases supporting two key systems we reviewed, including a patch that had been available since August 2012.

To its credit, IRS had established contingency plans for the systems we reviewed, which help ensure that when unexpected events occur, critical operations can continue without interruption or can be promptly resumed, and that information resources are protected. Specifically, IRS had established policies for developing contingency plans for its information systems and for testing those plans, as well as for implementing and enforcing backup procedures. Moreover, the agency had documented and tested contingency plans for its systems and improved continuity of operations controls for several systems.

Nevertheless, the control weaknesses can be attributed in part to IRS’s inconsistent implementation of elements of its agency-wide information security program. The agency established a comprehensive framework for its program, including assessing risk for its systems, developing system security plans, and providing employees with security awareness and specialized training. However, IRS had not updated key mainframe policies and procedures to address issues such as comprehensively auditing and monitoring access. In addition, the agency had not fully addressed previously identified deficiencies or ensured that its corrective actions were effective. During our most recent review, IRS told us it had addressed 28 of our prior recommendations; however, we determined that 9 of these had not been effectively implemented.

The collective effect of the deficiencies in information security from prior years that continued to exist in fiscal year 2015, along with the new deficiencies we identified, was serious enough to merit the attention of those charged with governance of IRS and therefore represented a significant deficiency in IRS’s internal control over financial reporting systems as of September 30, 2015.

To assist IRS in fully implementing its agency-wide information security program, we made two new recommendations to more effectively implement security-related policies and plans. In addition, to assist IRS in strengthening security controls over the financial and tax processing systems we reviewed, we made 43 technical recommendations in a separate report with limited distribution to address 26 new weaknesses in access controls and configuration management. Implementing these recommendations—in addition to the 49 outstanding recommendations from previous audits—will help IRS improve its controls for identifying and authenticating users, limiting users’ access to the minimum necessary to perform their job-related functions, protecting sensitive data when they are stored or in transit, auditing and monitoring system activities, and physically securing its IT facilities and resources.
Table 1 below provides the number of our prior recommendations to IRS that were not implemented at the beginning of our fiscal year 2015 audit, how many were resolved by the end of the audit, new recommendations, and the total number of outstanding recommendations at the conclusion of the audit. In commenting on drafts of our reports presenting the results of our fiscal year 2015 audit, the IRS Commissioner stated that while the agency agreed with our new recommendations, it will review them to ensure that its actions include sustainable fixes that implement appropriate security controls balanced against IT and human capital resource limitations. In addition, IRS can take steps to improve its response to data breaches. Specifically, in December 2013 we reported on the extent to which data breach policies at eight agencies, including IRS, adhered to requirements and guidance set forth by the Office of Management and Budget and the National Institute of Standards and Technology. While the agencies in our review generally had policies and procedures in place that reflected the major elements of an effective data breach response program, implementation of these policies and procedures was not consistent. With respect to IRS, we determined that its policies and procedures generally reflected key practices, although the agency did not require considering the number of affected individuals as a factor when determining if affected individuals should be notified of a suspected breach. In addition, IRS did not document lessons learned from periodic analyses of its breach response efforts. We recommended that IRS correct these weaknesses, but the agency has yet to fully address them. The importance of protecting taxpayer information is further highlighted by the billions of dollars that have been lost to IDT refund fraud, which continues to be an evolving threat. IRS develops estimates of the extent of IDT refund fraud to help direct its efforts to identify and prevent the crime. While its estimates have inherent uncertainty, IRS estimated that it prevented or recovered $22.5 billion in fraudulent IDT refunds in filing season 2014 (see figure 1). However, IRS also estimated, where data were available, that it paid $3.1 billion in fraudulent IDT refunds. Because of the difficulties in knowing the amount of undetectable fraud, the actual amount could differ from these estimates. IRS has taken steps to address IDT refund fraud; however, it remains a persistent and continually changing threat. IRS recognized the challenge of IDT refund fraud in its fiscal year 2014-2017 strategic plan and increased resources dedicated to combating IDT and other types of refund fraud. In fiscal year 2015, IRS reported that it staffed more than 4,000 full-time equivalents and spent about $470 million on all refund fraud and IDT activities. As described above, IRS received an additional $290 million for fiscal year 2016 to improve customer service, IDT identification and prevention, and cybersecurity efforts and the agency plans to use $16.1 million of this funding to help prevent IDT refund fraud, among other things. The administration requested an additional $90 million and an additional 491 full-time equivalents for fiscal year 2017 to help prevent IDT refund fraud and reduce other improper payments. IRS estimates that this $90 million investment in IDT refund fraud and other improper payment prevention would help it protect $612 million in revenue in fiscal year 2017, as well as protect revenue in future years. 
IRS has taken action to improve customer service related to IDT refund fraud. For example, between the 2011 and 2015 filing seasons, IRS experienced a 430 percent increase in the number of telephone calls to its Identity Theft Toll Free Line—as of March 19, 2016, IRS had received over 1.1 million calls to this line. Moreover, 77 percent of callers seeking assistance on this telephone line received it, compared to 54 percent during the same period last year. Average wait times during the same period have also decreased—taxpayers are waiting an average of 14 minutes to talk to an assistor, a decrease from 27 minutes last year.

IRS also works with third parties, such as tax preparation industry participants, states, and financial institutions, to try to detect and prevent IDT refund fraud. In March 2015, the IRS Commissioner convened a Security Summit with industry and states to improve information sharing and authentication. IRS officials said that 40 state departments of revenue and 20 tax industry participants have officially signed a partnership agreement to enact recommendations developed and agreed to by summit participants. IRS plans to invest a portion of the $16.1 million it received in fiscal year 2016 in identity theft prevention and refund fraud mitigation actions from the Security Summit. These efforts include developing an Information Sharing and Analysis Center where IRS, states, and industry can share information to combat IDT refund fraud.

Even though IRS has prioritized combating IDT refund fraud, fraudsters adapt their schemes to identify weaknesses in IDT defenses, such as gaining access to taxpayers’ tax return transcripts through IRS’s online Get Transcript service. According to IRS officials, with access to tax transcripts, fraudsters can create historically consistent returns that are hard to distinguish from a return filed by a legitimate taxpayer, potentially making it more difficult for IRS to identify and detect IDT refund fraud. Without additional action by IRS and Congress, the risk of issuing fraudulent IDT refunds could grow. We previously made recommendations to IRS to help it better combat IDT refund fraud:

Authentication. In January 2015, we reported that IRS’s authentication tools have limitations and recommended that IRS assess the costs, benefits, and risks of its authentication tools. For example, individuals can obtain an e-file PIN by providing their name, Social Security number, date of birth, address, and filing status for IRS’s e-file PIN application. Identity thieves can easily find this information, allowing them to bypass some, if not all, of IRS’s automatic checks, according to our analysis and interviews with tax software and return preparer associations and companies. After filing an IDT return using an e-file PIN, the fraudulent return would proceed through IRS’s normal return processing. In November 2015, IRS officials told us that the agency had developed guidance for its Identity Assurance Office to assess costs, benefits, and risk, and that its analysis will inform decision-making on authentication-related issues. IRS also noted that the methods of analysis for the authentication tools will vary depending on the different costs and other factors for authenticating taxpayers in different channels, such as online, phone, or in-person. In February 2016, IRS officials told us that the Identity Assurance Office plans to complete a strategic plan for taxpayer authentication across the agency in September 2016.
While IRS is taking steps, it will still be vulnerable until it completes and uses the results of its analysis of costs, benefits, and risks to inform decision-making. Form W-2, Wage and Tax Statement (W-2) Pre-refund Matching. In August 2014, we reported that the wage information that employers report on Form W-2 is not available to IRS until after it issues most refunds, and that if IRS had access to W-2 data earlier, it could match such information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. We recommended that IRS assess the costs and benefits of accelerating W-2 deadlines. In response to our recommendation, IRS provided us with a report in September 2015 discussing (1) adjustments to IRS systems and work processes needed to use accelerated W-2 information, (2) the potential impacts on internal and external stakeholders, and (3) other changes needed to match W-2 data to tax returns prior to issuing refunds, such as delaying refunds until W-2 data are available. In December 2015, the Consolidated Appropriations Act of 2016 amended the tax code to accelerate W-2 filing deadlines to January 31. IRS’s report will help IRS determine how best to implement pre-refund W-2 matching, given the new January 31 deadline for filing W-2s. Additionally, we suggested that Congress consider providing the Secretary of the Treasury with the regulatory authority to lower the threshold for electronic filing of W-2s, which could make more W-2 information available to IRS earlier. External Leads. IRS partners with financial institutions and other external parties to obtain information about emerging IDT refund trends and fraudulent returns that have passed through IRS detection systems. In August 2014, we reported that IRS provides limited feedback to external parties on the IDT external leads they submit and offers external parties limited general information on IDT refund fraud trends, and we recommended that IRS provide actionable feedback to all lead-generating third parties. In November 2015, IRS reported that it had developed a database to track leads submitted by financial institutions and the results of those leads. IRS also stated that it had held two sessions with financial institutions to provide feedback on external leads provided to IRS. In December 2015, IRS officials stated that the agency had sent a customer satisfaction survey asking financial institutions for feedback on the external leads process and was considering other ways to provide feedback to financial institutions. In April 2016, IRS officials stated that they planned to analyze preliminary survey results by mid-April 2016. Additionally, IRS officials reported that the agency shared information with financial institutions in March 2016 and plans to do so on a quarterly basis, with the next information sharing session scheduled for June 2016. IRS and industry partners have characterized returns processing and refund issuance during this filing season as generally smooth. Through April 1, 2016, IRS had processed about 95 million returns and issued 76 million refunds totaling about $215 billion. While IRS experienced a major system failure in February that halted returns processing for about a day, the agency reported that the failure had a minimal effect on overall processing of returns and refunds. In addition to filing returns, many taxpayers call IRS for assistance. IRS’s telephone service has generally improved in 2016 over last year.
From January 1 through March 19, 2016, IRS received about 35.4 million calls to its automated and live assistor telephone lines, about a 2 percent decrease compared to the same period last year. Of the 13.4 million calls seeking live assistance, IRS had answered 9.1 million calls—a 75 percent increase over the 5.2 million calls answered during the same period last year. IRS anticipated that 65 percent of callers seeking live assistance would receive it this filing season, which runs through April 18, and that 47 percent of callers would receive live assistance through the entire 2016 fiscal year. As of March 19, 2016, 75 percent of callers had received live assistance, an increase from 38 percent during the same period last year. Further, the average wait time to speak to an assistor also decreased from 24 to 9 minutes. As we reported in March 2016, however, IRS’s telephone level of service for the full fiscal year has yet to reach the levels it had achieved in earlier years. IRS attributed this year’s service improvement to a number of factors. Of the additional $290 million IRS received in December 2015, it allocated $178.4 million (61.5 percent) for taxpayer services to make measurable improvements in its telephone level of service. With the funds, IRS hired 1,000 assistors who began answering taxpayer calls in March, in addition to the approximately 2,000 seasonal assistors it had hired in fall 2015. To help answer taxpayer calls before March, IRS officials told us that the agency detailed 275 staff from one of its compliance functions to answer telephone calls. IRS officials said they believed this step was necessary because the additional funding came too late in the year to hire and train assistors to fully cover the filing season. IRS also plans to use about 600 full-time equivalents of overtime for assistors to answer telephone calls and respond to correspondence in fiscal year 2016, compared to fewer than 60 full-time equivalents of overtime used in fiscal year 2015. In December 2014, we recommended that IRS systematically and periodically compare its telephone service to the best in the business to identify gaps between actual and desired performance. IRS disagreed with this recommendation, noting that it is difficult to identify comparable organizations. We do not agree with IRS’s position; many organizations run call centers that would provide ample opportunities to benchmark IRS’s performance. In fall 2015, officials from the Department of the Treasury (Treasury) and IRS said they had no plans to develop a comprehensive customer service strategy or specific goals for telephone service tied to the best in the business and customer expectations. Without such a strategy, Treasury and IRS can neither measure nor effectively communicate to Congress the types and levels of customer service taxpayers should expect and the resources needed to reach those levels. Therefore, in December 2015, we suggested that Congress consider requiring that Treasury work with IRS to develop a comprehensive customer service strategy. In April 2016, IRS officials told us that the agency had established a team to consider our prior work as it develops this strategy and benchmarks its telephone service. In summary, while IRS has made progress in implementing information security controls, it needs to continue to address weaknesses in access controls and configuration management and to consistently implement all elements of its information security program.
The risks IRS and the public are exposed to have been illustrated by recent incidents involving public-facing applications, highlighting the importance of securing systems that contain sensitive taxpayer and financial data. In addition, fully implementing key elements of a breach response program will help ensure that when breaches of sensitive data do occur, their impact on affected individuals will be minimized. Weaknesses in information security can also increase the risk posed by identity theft refund fraud. IRS needs to establish an approach for addressing identity theft refund fraud that is informed by assessing the costs, benefits, and risks of IRS’s various authentication options and improving the reliability of fraud estimates. While this year’s tax filing season has generally gone smoothly and IRS has improved customer service, it still needs to develop a comprehensive approach to customer service that will meet the needs of taxpayers while ensuring that their sensitive information is adequately protected. Chairman Hatch, Ranking Member Wyden, and Members of the Committee, this concludes my statement. I look forward to answering any questions that you may have at this time. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, Nancy Kingsbury at (202) 512-2928 or kingsburyn@gao.gov, James R. McTigue, Jr. at (202) 512-9110 or mctiguej@gao.gov, or Jessica K. Lucas-Judy at (202) 512-9110 or LucasJudyJ@gao.gov. Other key contributors to this statement include Jeffrey Knott, Neil A. Pinney, and Joanna M. Stamatiades (assistant directors); Dawn E. Bidne; Mark Canter; James Cook; Shannon J. Finnegan; Lee McCracken; Justin Palk; J. Daniel Paulk; Erin Saunders Rath; and Daniel Swartz.
In collecting taxes, processing returns, and providing taxpayer service, IRS relies extensively on computerized systems. Thus it is critical that sensitive taxpayer and other data are protected. Recent data breaches at IRS highlight the vulnerability of taxpayer information. In addition, identity theft refund fraud is an evolving threat to honest taxpayers and tax administration. This crime occurs when a thief files a fraudulent return using a legitimate taxpayer's identity and claims a refund. In 2015, GAO added identity theft refund fraud to its high-risk area on the enforcement of tax laws and expanded its government-wide high-risk area on federal information security to include the protection of personally identifiable information. This statement discusses (1) IRS information security controls over financial and tax processing systems, (2) IRS actions to address identity theft refund fraud, and (3) the status of selected IRS filing season operations. This statement is based on previously published GAO work as well as an update of selected data. In March 2016, GAO reported that the Internal Revenue Service (IRS) had instituted numerous controls over key financial and tax processing systems; however, it had not always effectively implemented other controls intended to properly restrict access to systems and information, among other security measures. In particular, while IRS had improved some of its access controls, weaknesses remained in key controls for identifying and authenticating users, authorizing users' level of rights and privileges, encrypting sensitive data, auditing and monitoring network activity, and physically securing facilities housing its information technology resources. These weaknesses were due in part to IRS's inconsistent implementation of its agency-wide security program, including not fully implementing prior GAO recommendations. GAO concluded that these weaknesses collectively constituted a significant deficiency for the purposes of financial reporting for fiscal year 2015. As a result, taxpayer and financial data continue to be exposed to unnecessary risk. Identity theft refund fraud also poses a significant challenge. IRS estimates it paid $3.1 billion in these fraudulent refunds in filing season 2014, while preventing $22.5 billion (see figure). The full extent is unknown because of the challenges inherent in detecting this form of fraud. IRS has taken steps to combat identity theft refund fraud such as improving phone service for taxpayers to report suspected identity theft and working with industry, states, and financial institutions to detect and prevent it. However, as GAO reported in August 2014 and January 2015, additional actions can further assist the agency in addressing this crime, including pre-refund matching of taxpayer returns with information returns from employers, and assessing the costs, benefits, and risks of improving methods for authenticating taxpayers. In addition, the Consolidated Appropriations Act 2016 includes a provision that would help IRS with pre-refund matching and also includes an additional $290 million to enhance cybersecurity, combat identity theft refund fraud, and improve customer service. According to IRS and industry partners, the 2016 filing season has generally gone smoothly, with about 95 million returns and $215 billion in refunds processed through April 1, 2016. 
In addition, IRS increased its level of phone service to taxpayers, although it has not developed a comprehensive strategy for customer service as GAO recommended in December 2015. In addition to 49 prior recommendations that had not been implemented, GAO made 45 new recommendations to IRS to further improve its information security controls and the implementation of its agency-wide information security program. GAO has also made recommendations to help IRS combat identity theft refund fraud, such as assessing costs, benefits, and risks of taxpayer authentication options.
In 1991, we reported that, historically, INS leadership had allowed INS’ organizational structure to become decentralized without adequate controls. Specifically, its regional structure had created geographical separation among INS programs and hampered resource allocation and consistent program implementation. The field structure designed to carry out INS’ enforcement functions was bifurcated between districts and Border Patrol sectors, resulting in uncoordinated, overlapping programs. In addition, only a single senior INS headquarters manager supervised INS’ 33 district directors and 21 Border Patrol chiefs. In 1994, with the appointment of a new Commissioner, INS implemented an organizational structure intended to remedy at least two problems. First, the headquarters operations office’s unrealistically large span of control had resulted in uneven and poorly coordinated field performance. Second, the operations office’s preoccupation with matters that should have been handled by field managers had diverted its focus from program planning. The Commissioner shifted some management authority to officials closer to field activities. While INS made some progress toward achieving its reorganization goals, its organizational structure is still in a state of flux and some problems persist. For example, in 1997 we reported that the responsibilities and authority of the Office of Field Operations and Office of Programs were unclear. We recommended, among other things, that the INS Commissioner provide written guidance on (1) the responsibilities and authorities of these two offices and (2) the appropriate coordination and communication methods between these two offices, and between the Office of Programs and field offices. Although INS has taken some steps to implement our 1997 recommendations, they have yet to be completed because, according to INS, these recommendations relate to INS restructuring that is currently under study. As previously mentioned, INS’ mission involves carrying out two primary functions—enforcing immigration laws and providing services or benefits to eligible legal immigrants. These functions often translate into competing priorities at the program level that need to be balanced for effective program implementation. All too often, the emphasis placed on one over the other results in ineffective enforcement or poor benefit delivery. An example of this inability to balance these priorities can be found in our September 2000 report on the processing of visas for specialty occupations, called H-1B visas. The performance appraisal process for the staff who evaluate the merits of applications filed with INS (called adjudicators) focused mainly on the number of applications reviewed, not the quality of the review. INS rewarded those adjudicators who processed the greatest number of applications over those who processed fewer applications. Some adjudicators told us that because of pressure to adjudicate cases quickly, they did not routinely use investigations staff to look into potentially fraudulent applications because doing so would take more time and reduce the number of applications they could complete. INS investigators following up on approved applications found instances of fraud; for example, they found employers who created shell corporations and false credentials and documents for aliens ineligible for H-1B employment.
We found other examples where the goal of providing timely service delivery has negatively affected INS’ enforcement goal of ensuring that only eligible aliens receive benefits. In our May 2001 report on INS application processing, we stated that INS’ policy is to grant work authorization to applicants who file for adjustment of status to that of a permanent resident before it adjudicates their application. This policy is intended to prevent aliens from having to wait for INS to adjudicate their application before they can work. However, in fiscal year 2000, INS denied about 80,000 applicants for adjustment of status (about 14 percent of all the adjustment of status applications completed) and had to revoke their work authorization. Because these aliens had work authorization while waiting for their application to be processed, they could have developed a work history that may have facilitated their obtaining employment even after INS’ efforts to officially revoke their work authorization. A senior INS official stated that the policy to grant work authorization before the adjustment of status application is decided is intended to be fair to the majority of adjustment of status applicants who are approved. An investigation into INS’ initiative to process naturalization applications more quickly found the initiative to be fraught with quality and integrity problems that resulted in ineligible applicants receiving citizenship. According to a Department of Justice Office of Inspector General (OIG) report on INS’ Citizenship USA initiative launched in 1995, INS made the timely completion of naturalization applications its guiding principle at the expense of accuracy and quality in determining eligibility. As a result of the problems found, INS instituted naturalization quality control procedures to enhance the integrity of the process. We are finding a similar situation in our ongoing review for this subcommittee of INS’ efforts to deter immigration benefit fraud. We will discuss this and other issues related to immigration benefit fraud in a report to be released later this year. Other researchers have also found that INS had difficulty in balancing its enforcement and service delivery priorities. For example, the Visa Waiver Program allows nationals of certain countries to enter the United States with just a passport. No visa is required. According to a Department of Justice OIG report, abuse of the program poses a threat to national security and increases illegal immigration. The report found that aliens used stolen passports from Visa Waiver countries to illegally enter the United States. In one case, the OIG found that 27 stolen Icelandic passports had been used to smuggle children into the United States. Although the passport numbers of the stolen Icelandic passports had been entered into a lookout database, INS airport inspectors were not entering the passport numbers of passengers arriving with Icelandic passports into the lookout database. INS officials told the OIG investigators that manually keying these passport numbers into the system would take too long and would hamper INS’ ability to inspect all passengers from a flight within 45 minutes, as mandated by law.
Although some adjudicators believed the number of fraudulent applications submitted was significantly higher than the number they were detecting, they received little training in fraud detection. According to the report, some management and operations personnel indicated that performance evaluations are based in large part on the quantity of applications processed. The report concluded that whether employees receive incentives and rewards depends more on the quantity of applications processed than on fraud detection. Therefore, adjudicators had no incentive to actively search out fraud. As we reported in our applications processing report, despite these pressures to complete applications more quickly, INS’ backlog of applications increased to about 4 million applications by the end of fiscal year 2000, a fourfold increase since 1994. As of September 30, 2001, about 767,000 applicants out of almost 3 million with pending applications had been waiting at least 21 months for INS to process their application. In our 1997 management report, we found that poor communication was a problem, especially between headquarters and field units. For example, field and policy manuals were out of date and there was not one place that program staff could go for direction. More than half of the employees we surveyed in preparing that report believed that INS had poor communications and that information was disseminated poorly. As noted earlier in our testimony, how INS’ Office of Programs and Office of Field Operations were to coordinate was still unclear. Our recent work shows that coordination and communication are still problems. For example, although both the Border Patrol and INS’ Office of Investigations have anti-smuggling units that conduct alien smuggling investigations, these units operate through separate chains of command with different reporting structures. In May 2000, we reported that alien smuggling was a growing problem, and that the Border Patrol and Investigations anti-smuggling units operated autonomously, resulting in a lack of program coordination. Further, this lack of coordination sometimes led to different anti-smuggling units opening investigations on the same target. INS Investigations officials told us that the autonomy of the individual units and the lack of a single chain of command to manage INS’ anti-smuggling investigations were major obstacles to building a more effective anti-smuggling program. Communicating the necessary information to the appropriate individuals has also been a problem. In our H-1B report, we stated that adjudicators told us that they did not have easy access to case-specific information that would have helped them correctly decide whether an application should be approved or denied. For example, evidence of a fraudulent employer or falsified worker credentials either was not available to the adjudicator or could only be accessed through a time-consuming and complicated process. Consequently, a previously denied application could be resubmitted and approved by a different adjudicator. At the time of our review, INS officials told us that INS was in the process of upgrading the computer system that tracks H-1B applications, which could make more accurate and up-to-date information available online for adjudicators. Our work and the work of an INS contractor both found that INS did not have a structure in place to manage the information that adjudicators needed to make correct decisions.
Information systems were not easily accessible to all adjudicators, so these systems were generally not queried as part of the adjudication process. INS had no single repository of information where adjudicators could find the most up-to-date information on such things as adjudication processes and legal and regulatory policies. In one case, the lack of communication and unclear policies and procedures had tragic consequences. In January 1999, police in Texas obtained a warrant for the arrest of Rafael Resendez-Ramirez, the “railway killer” who traveled around the United States by freight train and committed murders near railroad lines. In early 1999, police contacted INS Investigations staff in Houston, Texas, several times about placing a “border lookout” for Resendez-Ramirez in case he was apprehended at the border. According to a Department of Justice OIG report, none of the Investigations staff contacted by the police thought to inform the police about the existence of IDENT, INS’ automated fingerprint identification system. The Investigations staff also failed to enter a lookout in IDENT in case Resendez-Ramirez was apprehended trying to cross the border. On June 1, 1999, the Border Patrol apprehended Resendez-Ramirez trying to cross illegally and had him processed through the IDENT system. Because no border lookout had been placed, however, the Border Patrol voluntarily returned him to Mexico in accordance with standard Border Patrol practices. He subsequently returned illegally to the United States and committed four more murders before he was captured. INS’ Houston investigations staff provided OIG investigators with various reasons why they did not mention IDENT or its lookout capability to police or enter a lookout in IDENT, including the following: They were unfamiliar with IDENT and how it worked. They never received any IDENT training. They were unaware IDENT had a lookout feature. They thought IDENT was a system primarily for the Border Patrol to use. The OIG concluded that the lack of knowledge about IDENT was largely the result of broader problems in the way INS implemented and monitored IDENT. INS failed to (1) ensure that components outside of the Border Patrol, such as Investigations, understood IDENT policies, particularly the lookout policy, and (2) provide adequate IDENT training for all INS staff. INS and the FBI are currently working on integrating IDENT with the FBI’s automated fingerprint system to improve the quality and accuracy of criminal identification so that such mistakes can be averted in the future. Effective communication has also been a problem between INS and local communities. In August 2001, we reported that since 1994, as INS’ Border Patrol increased enforcement efforts in certain locations as part of its strategy to deter illegal entry along the southwest border, illegal alien traffic has shifted to other locations. Officials from some border communities told us that they were caught by surprise by the increase in the number of illegal aliens apprehended in their communities. INS has recognized the need to improve communications with the public regarding its strategy and its potential implications and has increased its outreach efforts. INS has had long-standing difficulty developing and fielding information systems to support its program operations. In 1990, we reported that INS managers and field officials did not have adequate, reliable, and timely information to effectively carry out the Service’s mission.
We also reported that INS had not conducted a comprehensive agency-wide information needs assessment. As a result, program and management data were kept in a loose collection of automated systems as well as a number of ad hoc, labor-intensive manual systems. Effectively using information technology remains a challenge for INS. In August 2000, we reported that INS did not have a “blueprint” to guide the development of its information systems. The absence of such a plan increases the risk that the information systems in which hundreds of millions of dollars are invested each year will not be well integrated or compatible and will not support mission needs. In December 2000, we reported that INS had limited capability to effectively manage its planned and ongoing information technology investments. While INS has some important information technology management capabilities in place, it has to do considerable work to fully implement mature and effective processes. The Department of Justice agreed with our recommendation that INS develop and submit a plan to Justice for implementing investment management process improvements. INS is in the process of developing this plan. The lack of adequate information technology systems has significantly affected INS’ ability to perform its core missions. As we reported in our applications processing report, INS headquarters and field staff cited automation problems as the number one factor affecting INS’ ability to process applications in a timely manner to reduce backlogs. INS has no national case management system for applications filed at its 33 district offices. Most of these offices process applications manually. As a result, these offices cannot determine the number of pending cases, identify problem areas or bottlenecks, establish processing priorities, deploy staff based on workload, or ensure cases are processed in the order received. Due to the lack of any automated system, staff spend considerable time responding to applicants’ inquiries on the status of their case, which takes time away from application processing. Existing INS systems used to process applications do not provide accurate and reliable data. In our applications processing report, we stated that the system INS Service Centers use to process some applications frequently fails to operate and does not always update data to INS’ mainframe computer as it should. This lack of automation has resulted in INS expending considerable time and effort to obtain the data it needs. In our applications processing report, we also stated that lack of reliable data was the primary reason INS undertook a time-consuming and costly hand-count of all pending applications in September 2000. INS undertook the hand-count to get an accurate count of pending applications, hoping to obtain an unqualified opinion on its fiscal year 2000 financial statements. According to INS officials, the cost to complete this hand-count was high in terms of lost production and staff time. INS suspended nearly all case processing for 2 to 3 weeks. Due to the lack of accurate data in its computer systems, INS will have to do another hand-count of all pending applications at the end of fiscal year 2001 if it hopes to obtain an unqualified opinion on its financial statements. As a result of this lack of accurate data, INS has also approved more visas than the Congress has allowed.
According to an INS contractor study, INS’ system that tracks these visas was not designed to keep a running total of the number of visas issued or to compare that total against the annual limit to ensure that only the allowable number is approved. Consequently, in fiscal year 1999, INS approved approximately 137,000 to 138,000 H-1B visas, well over the 115,000 limit. Program management issues at INS have caused continuing concern. Our work indicates that INS needs to improve its program management in several fundamental areas, including having efficient processes and clear policies and procedures, providing adequate staff training, and aligning its workforce with its workload. The INS contractor study on immigration benefits processing found that INS’ processes were inefficient. For example, INS staff spend considerable time re-entering the same data into various INS computer systems. INS did not consistently adjudicate applications because the procedures used to process applications varied by office, most field offices allowed adjudicators to review cases using minimal guidelines, and standard quality controls were lacking. The study made numerous recommendations on how to make the processes more efficient and improve quality control. We stated in our applications processing report that INS was developing a strategic plan to reengineer applications processing. INS will make decisions regarding the contractor’s recommendations after completing two related strategic plans: the plan to reengineer applications processing and the information technology strategic plan. Both are in the early planning stages. INS estimated that it would take 5 years or more to develop and implement the reengineered processes and implement a service-wide automated system to process applications. Adequate staff training is also a critical aspect of program management. As noted earlier in our testimony, an INS contractor study found that INS adjudicators received little training in fraud detection. According to a November 2000 INS report prepared as part of INS’ Government Performance and Results Act reporting requirements, the INS workforce is not well supported in terms of training. Advanced training classes have been cut back or delayed. According to the report, because of the growing workforce and these training cutbacks, INS will have a larger portion of its workforce that is relatively inexperienced and inadequately trained for its work.
The Immigration and Naturalization Service’s (INS) organizational structure has led to recurring management problems, including an inability to balance competing priorities, poor communications, and weaknesses in the development and fielding of critical information technology. Although restructuring may help, INS will still need to assemble the basic building blocks essential to any organization. These building blocks include clearly delineated roles and responsibilities, policies and procedures that effectively balance competing priorities, effective internal and external communication and coordination, and computer systems that provide accurate and timely information. Until these elements are in place, it will be difficult to enforce the nation’s immigration laws effectively.
While TCE and perchlorate are both DOD-classified emerging contaminants, there are key distinctions between the two that affect the extent to which they are regulated and the information that may be needed before further steps are taken to protect human health and the environment. Since 1989, a maximum contaminant level (MCL) under the Safe Drinking Water Act (SDWA) has been in place for TCE. In contrast, EPA has not adopted an MCL for perchlorate, although recent government-sponsored studies have raised concerns that even low levels of exposure to perchlorate may pose serious risks to infants and fetuses of pregnant women. We provided details about EPA’s evolving standards for TCE and the evolving knowledge of its health effects in our May 2007 report and June 2007 testimony on issues related to drinking water contamination on Camp Lejeune. TCE is a colorless liquid with a sweet, chloroform-like odor that is used mainly as a degreaser for metal parts. The compound is also a component in adhesives, lubricants, paints, varnishes, paint strippers, and pesticides. At one time, TCE was used as an extraction solvent for cosmetics and drug products and as a dry-cleaning agent; however, its use for these purposes has been discontinued. DOD has used the chemical in a wide variety of industrial and maintenance processes. More recently, the department has used TCE to clean sensitive computer circuit boards in military equipment such as tanks and fixed-wing aircraft. Because TCE is pervasive in the environment, most people are likely to be exposed to TCE by simply eating, drinking, and breathing, according to the Department of Health and Human Services’ (HHS) Agency for Toxic Substances and Disease Registry (ATSDR). Industrial wastewater is the primary source of release of TCE into water systems, but inhalation is the main route of potential environmental exposure to TCE. ATSDR has also reported that TCE has been found in a variety of foods, with the highest levels in meats, at 12 to 16 ppb, and U.S. margarine, at 440 to 3,600 ppb. In fact, HHS’s National Health and Nutrition Examination Survey (NHANES) suggested that approximately 10 percent of the population had detectable levels of TCE in their blood. Inhaling small amounts of TCE may cause headaches, lung irritation, poor coordination, and difficulty concentrating, according to ATSDR’s Toxicological Profile. Inhaling high levels of TCE or drinking liquids containing high levels of TCE may cause nervous system effects, liver and lung damage, abnormal heartbeat, coma, or possibly death. ATSDR also notes that some animal studies suggest that high levels of TCE may cause liver, kidney, or lung cancer, and some studies of people exposed over long periods to high levels of TCE in drinking water or workplace air have shown an increased risk of cancer. ATSDR’s Toxicological Profile notes that the National Toxicology Program has determined that TCE is “reasonably anticipated to be a human carcinogen” and the International Agency for Research on Cancer has determined that TCE is probably carcinogenic to humans—specifically, kidney, liver, and cervical cancers, Hodgkin’s disease, and non-Hodgkin’s lymphoma—based on limited evidence of carcinogenicity in humans and additional evidence from studies in experimental animals.
Despite EPA’s regulation of TCE as a drinking water contaminant, concerns over serious long-term effects associated with TCE exposures have prompted additional scrutiny by both governmental and nongovernmental scientific organizations. For example, ATSDR initiated a public health assessment in 1991 to evaluate the possible health risks from exposure to contaminated drinking water on Camp Lejeune. The health concerns over TCE have been further amplified in recent years as scientific studies have suggested additional risks posed by human exposure to TCE. ATSDR is continuing to develop information about the possible long-term health consequences of these potential exposures in a subregistry to the National Exposure Registry specifically for hazardous waste sites. As we previously reported with respect to Camp Lejeune, those who lived on base likely had a higher risk of inhalation exposure to volatile organic compounds such as TCE, and inhalation exposure may be more potent than ingestion exposure. Thus, pregnant women who lived in areas of base housing with contaminated water and conducted activities during which they could inhale water vapor—such as bathing, showering, or washing dishes or clothing—likely faced greater exposure than those who did not live on base but worked on base in areas served by the contaminated drinking water. Concerns about possible adverse health effects and government actions related to the past drinking water contamination on Camp Lejeune have led to additional activities, including new health studies, claims against the federal government, and federal inquiries. As a consequence of these growing concerns—and of anxiety among affected communities about these health effects and related litigation—ATSDR has undertaken a study to examine whether individuals who were exposed in utero to the contaminated drinking water are more likely to have developed certain childhood cancers or birth defects. This research, once completed later in 2007, is expected to help regulators understand the effects of low levels of TCE in our environment. In addition, some former residents of Camp Lejeune have filed tort claims and lawsuits against the federal government related to the past drinking water contamination. As of June 2007, about 850 former residents and former employees had filed tort claims with the Department of the Navy related to the past drinking water contamination. According to an official with the U.S. Navy Judge Advocate General—which is handling the claims on behalf of the Department of the Navy—the agency is currently maintaining a database of all claims filed. The official said that the Judge Advocate General is awaiting completion of the latest ATSDR health study before deciding whether to settle or deny the pending claims in order to base its response on as much objective scientific and medical information as possible. According to DOD, any future reassessment of TCE toxicity may result in additional reviews of DOD sites that used the former TCE toxicity values, as the action levels for TCE cleanup in the environment may change. As we discussed in our May 2005 report and April 2007 testimony, EPA has not established a standard for limiting perchlorate concentrations in drinking water under the SDWA. Perchlorate has emerged as a matter of concern because recent studies have shown that it can affect the thyroid gland, which helps to regulate the body’s metabolism, and that it may cause developmental impairments in the fetuses of pregnant women.
Perchlorate is a primary ingredient in propellant and has been used for decades by the Department of Defense, the National Aeronautics and Space Administration, and the defense industry in manufacturing, testing, and firing missiles and rockets. Other uses include fireworks, fertilizers, and explosives. It is readily dissolved and transported in water and has been found in groundwater, surface water, drinking water, and soil across the country. The sources of perchlorate vary, but the defense and aerospace industries are the greatest known source of contamination. Scientific information on perchlorate was limited until 1997, when a better detection method became available, and detections (and concern about perchlorate contamination) increased. In 1998, EPA first placed perchlorate on its Contaminant Candidate List, the list of contaminants that are candidates for regulation, but the agency concluded that information was insufficient to determine whether perchlorate should be regulated under the SDWA. EPA listed perchlorate as a priority for further research on health effects and treatment technologies and for collecting occurrence data. In 1999, EPA required water systems to monitor for perchlorate under the Unregulated Contaminant Monitoring Rule to determine the frequency and levels at which it is present in public water supplies nationwide. Interagency disagreements over the risks of perchlorate exposure led several federal agencies to ask the National Research Council (NRC) of the National Academy of Sciences to evaluate perchlorate’s health effects. In 2005, NRC issued a comprehensive review of the health effects of perchlorate ingestion, and it reported that certain levels of exposure may not adversely affect healthy adults. However, NRC recommended more studies on the effects of perchlorate exposure in children and pregnant women, and it recommended a reference dose of 0.0007 milligrams per kilogram per day. In 2005, EPA adopted the NRC-recommended reference dose, which translates to a drinking water equivalent level (DWEL) of 24.5 ppb. If EPA were to develop a drinking water standard for perchlorate, it would adjust the DWEL to account for other sources of exposure, such as food. Although EPA has taken some steps to consider a standard, in April 2007 EPA again decided not to regulate perchlorate—citing the need for additional research—and kept perchlorate on its Contaminant Candidate List. Several human studies have shown that thyroid changes occur in adults at concentrations significantly higher than the amounts typically observed in water supplies. However, more recent studies have provided new knowledge and raised concerns about potential health risks of low-level exposures, particularly for infants and fetuses. Specifically, in October 2006, researchers from the Centers for Disease Control and Prevention (CDC) published the results of the first large study to examine the relationship between low-level perchlorate exposure and thyroid function in women with lower iodine levels. About 36 percent of U.S. women have these lower iodine levels. The study found decreases in a thyroid hormone that helps regulate the body’s metabolism and is needed for proper fetal neural development. Moreover, in May 2007, FDA released a preliminary exposure assessment because of significant public interest in the issue of perchlorate exposure from food.
For perchlorate, FDA sampled and tested foods such as tomatoes, carrots, spinach, and cantaloupe; other high-water-content foods such as apple and orange juices; vegetables such as cucumbers, green beans, and greens; and seafood such as fish and shrimp, and it found low levels of perchlorate to be widespread in these items. FDA is also planning to publish, in late 2007, an assessment of exposure to perchlorate from foods, based on results from its fiscal year 2005-2006 Total Diet Study—a market basket study that is representative of the U.S. diet. Some federal funding has been directed to perchlorate studies and cleanup activities. For example, committee reports related to the DOD and EPA appropriations acts of fiscal year 2006 directed some funding for perchlorate cleanup. In the Senate committee report for the Department of Health and Human Services fiscal year 2006 appropriations act, the committee encouraged support for studies on the long-term effects of perchlorate exposure. The Senate committee report for FDA’s fiscal year 2006 appropriations act directed FDA to continue conducting surveys of perchlorate in food and bottled water and to report the findings to Congress. In the current Congress, legislation has been introduced that would require EPA to establish a health advisory for perchlorate, as well as require public water systems serving more than 10,000 people to test for perchlorate and disclose its presence in annual consumer confidence reports. Other pending legislation would require EPA to establish a national primary drinking water standard for perchlorate. DOD has certain responsibilities with regard to emerging contaminants such as TCE that are regulated by EPA or state governments, but its responsibilities and cleanup goals are less definite for emerging contaminants such as perchlorate that lack federal regulatory standards. As we have previously reported, DOD must comply with any cleanup standards and processes under all applicable environmental laws, regulations, and executive orders, including the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), the Resource Conservation and Recovery Act (RCRA), the Clean Water Act’s National Pollutant Discharge Elimination System (NPDES), and the SDWA. DOD’s designation of perchlorate as an emerging contaminant reflects the department’s recognition that the chemical has a significant potential impact on people or the department’s mission. DOD’s recognition of a substance as an emerging contaminant can lead DOD to decide to undertake certain cleanup efforts even in the absence of a federal regulatory standard. In addition, federal laws enacted in fiscal years 2004 and 2005 required DOD to conduct health studies and evaluate perchlorate found at military sites. For example, the Ronald W. Reagan National Defense Authorization Act for fiscal year 2005 stated that the Secretary of Defense should develop a plan for cleaning up perchlorate resulting from DOD activities when the perchlorate poses a health hazard and should continue evaluating identified sites. As we reported in our 2005 perchlorate report, EPA and state environmental authorities have used a patchwork of statutes, regulations, and general oversight authorities to act—or to require others, including DOD, to act—when perchlorate was deemed to pose a threat to human health and the environment, and DOD has sometimes responded to these requests.
For example, pursuant to its authority under the Clean Water Act’s NPDES program, Texas required the Navy to reduce perchlorate levels in wastewater discharges at the McGregor Naval Weapons Industrial Reserve Plant to 4 parts per billion, the lowest level at which perchlorate could be detected. Similarly, after sampling required as part of a RCRA permit detected perchlorate, Utah officials required ATK Thiokol, an explosives and rocket fuel manufacturer, to install a monitoring well to determine the extent of perchlorate contamination at its facility and to take steps to prevent additional releases of perchlorate. In addition, EPA and state officials told us during our 2005 review that they have sometimes used their general oversight responsibilities for protecting water quality and human health to investigate and sample groundwater and surface water areas for perchlorate. For example, EPA asked Patrick Air Force Base and the Cape Canaveral Air Force Station, Florida, to sample groundwater for perchlorate near rocket launch sites. Previously, both installations had inventoried areas where perchlorate was suspected and conducted limited sampling. DOD officials did not find perchlorate at Patrick Air Force Base and, according to an EPA official, the Department of the Air Force said it would not conduct additional sampling at either installation until there was a federal standard for perchlorate. Finally, according to EPA, in the absence of a federal perchlorate standard, at least eight states have established nonregulatory action levels or advisories for perchlorate ranging from 1 part per billion to 51 parts per billion. (See table 1.) Massachusetts is the only state to have established a drinking water standard—set at 2 ppb. The California Department of Health Services reports that California will complete the rulemaking for its proposed standard of 6 ppb later this year. States have used these thresholds to identify the level at which some specified action must be taken by DOD and other facilities in their state, in the absence of a federal standard. For example, Oregon initiates in-depth site studies to determine the cause and extent of perchlorate contamination when concentrations of 18 ppb or greater are found. Nevada required the Kerr-McGee Chemical site in Henderson to treat groundwater and reduce perchlorate concentration releases to 18 ppb, which is Nevada’s action level for perchlorate. Utah officials told us that while the state did not have a written action level for perchlorate, it might require the responsible party to undertake cleanup activities if perchlorate concentrations exceed 18 ppb. DOD is undertaking a number of activities to address emerging contaminants in general, including the creation of the Materials of Evolving Regulatory Interest Team (MERIT) to systematically address the health, environmental, and safety concerns associated with emerging contaminants. As noted above, DOD is required to follow EPA regulations for monitoring and cleanup of TCE. In addition, DOD is working with ATSDR, which has projected a December 2007 completion date for its current study of TCE’s health effects on pregnant women and their children. In the absence of a federal standard, DOD has adopted its own perchlorate policies for sampling and cleanup activities or is working under applicable state guidelines. DOD created MERIT to help address the health, environmental, and safety concerns associated with emerging contaminants.
According to DOD, MERIT has focused on materials that have been or are used by DOD, or are under development for use, such as perchlorate, TCE, RDX, DNT and new explosives, naphthalene, perfluorooctanoic acid (PFOA), hexavalent chromium (i.e., chromium VI), beryllium, and nanomaterials. MERIT’s initiatives include pollution prevention, detection/analytical methods, human health studies, treatment technologies, lifecycle cost analysis, risk assessment and risk management, and public outreach. Another of MERIT’s activities was to create an Emerging Contaminant Action List of materials that DOD has assessed and judged to have a significant potential impact on people or DOD’s mission. The current list includes five contaminants—perchlorate, TCE, RDX, naphthalene, and hexavalent chromium. To be placed on the action list, a contaminant generally will have been assessed by MERIT for its impacts on (1) environment, safety, and health (including occupational and public health), (2) cleanup efforts, (3) readiness and training, (4) acquisition, and (5) operation and maintenance activities. In 1979, EPA issued nonenforceable guidance establishing “suggested no adverse response levels” for TCE in drinking water. These levels provided EPA’s estimate of the short- and long-term exposure to TCE in drinking water for which no adverse response would be observed and described the known information about possible health risks for the chemical. However, the guidance for TCE did not suggest actions that public water systems should take if TCE concentrations exceeded those values. Subsequently, in 1989, EPA set an enforceable MCL for TCE of 5 micrograms per liter, equivalent to 5 ppb in drinking water. The new standard served as a regulatory basis for many facilities to take concrete action to measure and control TCE. According to EPA’s Region 4 Superfund Director, for example, 46 sites on Camp Lejeune have since been identified for TCE cleanup. The Navy and EPA have selected remedies for 30 of those sites, and the remaining 16 are under active investigation. The first Record of Decision was signed in September 1992 and addressed contamination of groundwater in the Hadnot Point Area, one of Camp Lejeune’s water systems. Remedies to address groundwater contamination include groundwater “pump and treat” systems, in-situ chemical oxidation, and monitored natural attenuation. DOD contends that it is aggressively treating TCE as part of its current cleanup program. It notes that the department uses much less TCE than in the past and requires strict handling procedures and pollution prevention measures to prevent exposure to TCE and the release of TCE into the environment. Specifically, DOD has replaced products containing TCE with other types of cleaning agents, such as citrus-based agents, mineral oils, and other nontoxic solutions. In the absence of a federal perchlorate standard, DOD has adopted its own policies with regard to sampling and cleanup. The 2003 Interim Policy on Perchlorate Sampling required the military services—Army, Navy, Air Force, and Marines—to sample on active installations (1) where a reasonable basis existed to suspect that a perchlorate release had occurred as a result of DOD activities and (2) where a complete human exposure pathway likely existed, or (3) where a particular installation was required to do so under state laws or applicable federal regulations such as the NPDES permit program.
However, DOD’s interim policy on perchlorate did not address cleanup responsibilities, nor did it address contamination at closed installations. As we detailed in our previous work, DOD sampled for perchlorate on closed installations only when requested by EPA or a state agency, and it cleaned up active and closed installations only when required by a specific environmental law, regulation, or program, such as the environmental restoration program at formerly used defense sites. For example, at EPA’s request, the U.S. Army Corps of Engineers (Corps) installed monitoring wells and sampled for perchlorate at Camp Bonneville, a closed installation near Vancouver, Washington. Utah state officials also reported to us that DOD removed soil containing perchlorate at the former Wendover Air Force Base in Utah, where the Corps found perchlorate in 2004. However, as we previously reported, DOD cited the lack of a federal regulatory standard for perchlorate as the reason for its reluctance to sample on or near active installations. In the absence of a federal standard, DOD has also worked with individual states on perchlorate sampling and cleanup. For example, in October 2004, DOD and California agreed to prioritize perchlorate sampling at DOD facilities in California, including identifying and prioritizing the investigation of areas on active installations and military sites (1) where the presence of perchlorate is likely based on previous and current defense-related activities and (2) near drinking water sources where perchlorate was found. In January 2006, DOD updated its policy with the issuance of its Policy on DOD Required Actions Related to Perchlorate. The new policy applies broadly to DOD’s active and closed installations and formerly used defense sites within the United States, its territories, and its possessions. It directs DOD to test for perchlorate and take certain cleanup actions. The policy also acknowledges the importance of EPA direction in driving DOD’s response to emerging contaminants. It states, for example, that its adoption of 24 ppb as the current level of concern for managing perchlorate was in response to EPA’s adoption of an oral reference dose that translates to a Drinking Water Equivalent Level of 24.5 ppb. The policy also states that when EPA or the states adopt standards for perchlorate, “DOD will comply with applicable state or federal promulgated standards whichever is more stringent.” The 2006 policy directs DOD to test for perchlorate when it is reasonably expected that a release has occurred. If perchlorate levels exceed 24 ppb, a site-specific risk assessment must be conducted. When an assessment indicates that the perchlorate contamination could result in adverse health effects, the site must be prioritized for risk management. DOD uses a relative-risk site evaluation framework to evaluate the risks posed by one site relative to other sites, to help prioritize environmental restoration work, and to allocate resources among sites. The policy also directs DOD’s service components to program resources to address perchlorate contamination under four DOD programs—environmental restoration, operational ranges, DOD-owned drinking water systems, and DOD wastewater effluent discharges. Under the 2006 perchlorate policy, DOD has sampled drinking water, groundwater, and soil where the release of perchlorate may result in human exposure and has responded where it has deemed appropriate to protect public health.
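As a point of reference, the 24.5 ppb drinking water equivalent level cited above and in the earlier discussion of EPA’s 2005 decision can be reproduced from the NRC reference dose of 0.0007 milligrams per kilogram per day, assuming the conventional default exposure factors of a 70-kilogram adult body weight and 2 liters of drinking water consumed per day; these defaults are our assumption for illustration and are not spelled out in this statement:

\[
\text{DWEL} = \frac{\text{RfD} \times \text{body weight}}{\text{daily water intake}}
= \frac{0.0007\ \text{mg/kg-day} \times 70\ \text{kg}}{2\ \text{L/day}}
= 0.0245\ \text{mg/L} \approx 24.5\ \text{ppb}
\]

Because this calculation attributes all perchlorate exposure to drinking water, any eventual drinking water standard would typically be set below the DWEL to account for other exposure sources, such as food, as noted earlier.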
As we have reported, DOD is responsible for a large number of identified sites with perchlorate contamination, and the department has allotted significant resources to address the problem. According to DOD, sampling for perchlorate has occurred at 258 active DOD installations or facilities. Through fiscal year 2006, DOD reported spending approximately $88 million on perchlorate-related research activities, including $60 million for perchlorate treatment technologies, $9.5 million on health and toxicity studies, and $11.6 million on pollution prevention. Additional funds have been spent on testing technology and cleanup. DOD also claims credit for other efforts, including strict handling procedures to prevent the release of perchlorate into the environment and providing information about perchlorate at DOD facilities and DOD's responses. For example, DOD posts the results of its perchlorate sampling, by state, on MERIT's Web site. As we have previously reported, DOD must comply with cleanup standards and processes under applicable laws, regulations, and executive orders, including EPA drinking water standards and state-level standards. In the absence of a federal perchlorate standard, DOD has also initiated perchlorate response actions to clean up perchlorate contamination at several active and formerly used defense sites under its current perchlorate policy. For example, at Edwards Air Force Base in California, DOD has treated 32 million gallons of groundwater under a pilot project for contaminants that include perchlorate. In addition, DOD has removed soil and treated groundwater at the Massachusetts Military Reservation and Camp Bonneville in Washington State. In conclusion, Mr. Chairman, DOD faces significant challenges, and potentially large costs, in addressing emerging contaminants, particularly in light of the scientific developments and regulatory uncertainties surrounding these chemicals and materials. To help address them, DOD recently identified five emerging contaminants for which it is developing risk management options. As in the case of TCE, DOD took action to address contamination after EPA established an MCL in 1989. DOD has stated that further efforts to address perchlorate would require a regulatory standard from EPA and/or the states. The fact that some states have moved to create such standards complicates the issue for DOD by presenting it with varying cleanup standards across the country. As the debate over a federal perchlorate standard continues, the recently issued health studies from CDC and FDA may provide additional weight to the view that the time for such a standard may be approaching. Until one is adopted, DOD will continue to face the challenges of differing regulatory requirements in different states and continuing questions about whether its efforts to control perchlorate contamination are necessary or sufficient to protect human health. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. For further information about this testimony, please contact John Stephenson at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Steven Elstein, Assistant Director, and Terrance Horner, Senior Analyst. Marc Castellano, Richard Johnson, and Alison O'Neill also made key contributions. 
Defense Health Care: Issues Related to Past Drinking Water Contamination at Marine Corps Base Camp Lejeune, GAO-07-933T (June 12, 2007).
Defense Health Care: Activities Related to Past Drinking Water Contamination at Marine Corps Base Camp Lejeune, GAO-07-276 (May 11, 2007).
Perchlorate: EPA Does Not Systematically Track Incidents of Contamination, GAO-07-797T (April 25, 2007).
Environmental Information: EPA Actions Could Reduce the Availability of Environmental Information to the Public, GAO-07-464T (February 6, 2007).
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property, GAO-07-166 (January 30, 2007).
Perchlorate: A System to Track Sampling and Cleanup Results Is Needed, GAO-05-462 (May 20, 2005).
Military Base Closures: Updated Status of Prior Base Realignments and Closures, GAO-05-138 (January 13, 2005).
Environmental Contamination: DOD Has Taken Steps to Improve Cleanup Coordination at Former Defense Sites but Clearer Guidance Is Needed to Ensure Consistency, GAO-03-146 (March 28, 2003).
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD defines emerging contaminants as chemicals or materials with (1) a perceived or real threat to health or the environment and (2) a lack of published standards or a standard that is evolving or being reevaluated. Two emerging contaminants—trichloroethylene (TCE) and perchlorate—are of particular concern to DOD because they have significant potential to impact people or DOD's mission. TCE, a degreasing agent for metal cleaning that has been used widely in DOD industrial and maintenance processes, has been documented to cause headaches and difficulty concentrating at low exposure levels. High-level exposure may cause dizziness, headaches, nausea, unconsciousness, cancer, and possibly death. Similarly, perchlorate has been used by DOD, NASA, and others in making, testing, and firing missiles and rockets. It has been widely found in groundwater, surface water, and soil across the United States. Perchlorate health studies have documented particular risks to fetuses of pregnant women. GAO was asked for testimony summarizing its past work on TCE, perchlorate, and related defense activities, specifically (1) the state of knowledge about the emerging contaminants TCE and perchlorate, (2) DOD responsibilities for managing TCE and perchlorate contamination at its facilities, and (3) DOD activities to address TCE and perchlorate contamination. While TCE and perchlorate are both classified by DOD as emerging contaminants, there are important distinctions in how they are regulated and in what is known about their health and environmental effects. Since 1989, EPA has regulated TCE in drinking water. However, health concerns over TCE have been further amplified in recent years after scientific studies have suggested additional risks posed by human exposure to TCE. Unlike TCE, no drinking water standard exists for perchlorate—a fact that has caused much discussion in Congress and elsewhere. Recent Food and Drug Administration data documenting the extent of perchlorate contamination in the nation's food supply have further fueled this debate. While DOD has clear responsibilities to address TCE because it is subject to EPA's regulatory standard, DOD's responsibilities are less definite for perchlorate due to the lack of such a standard. Nonetheless, perchlorate's designation by DOD as an emerging contaminant has led to some significant control actions. These actions have included responding to requests by EPA and state environmental authorities, which have used a patchwork of statutes, regulations, and general oversight authorities to address perchlorate contamination. Pursuant to its Clean Water Act authorities, for example, Texas required the Navy to reduce perchlorate levels in wastewater discharges at the McGregor Naval Weapons Industrial Reserve Plant to 4 parts per billion (ppb), the lowest level at which perchlorate could be detected at the time. In addition, in the absence of a federal perchlorate standard, at least nine states have established nonregulatory action levels or advisories for perchlorate ranging from 1 ppb to 51 ppb. Nevada, for example, required the Kerr-McGee Chemical site in Henderson to treat groundwater and reduce perchlorate releases to 18 ppb, which is Nevada's action level for perchlorate. While nonenforceable guidance had existed previously, it was not until EPA adopted its 1989 TCE standard that many DOD facilities began to take concrete action to control the contaminant. According to EPA, for example, 46 sites at Camp Lejeune have since been identified for TCE cleanup. 
The Navy and EPA have selected remedies for 30 of those sites, and the remaining 16 are under active investigation. Regarding perchlorate, in the absence of a federal standard, DOD has implemented its own policies on sampling and cleanup, most recently with its 2006 Policy on DOD Required Actions Related to Perchlorate. The policy applies broadly to DOD's active and closed installations and formerly used defense sites within the United States and its territories. It requires testing for perchlorate and certain cleanup actions and directs the department to comply with applicable federal or state promulgated standards, whichever is more stringent. The policy notes that DOD has established 24 ppb as the current level of concern for managing perchlorate until the promulgation of a formal standard by the states and/or EPA.
As you know, the cost of the Decennial Census has steadily increased during the past 40 years, in part because the nation's population has steadily grown larger, more diverse, and increasingly difficult to enumerate. For example, at about $13 billion, the 2010 Census was the costliest U.S. census in history and was 56 percent more costly than the $8.1 billion 2000 Census (in constant 2010 dollars). To help save costs, in preparing for the 2020 Census, the Bureau has been researching and testing new methods and technologies to redesign the Census to more cost-effectively count the population while maintaining high-quality results. The Bureau's research and testing has focused on four redesign areas: Reengineering address canvassing: This involves reengineering processes for updating the Bureau's address list and maps of the nation to reduce the need for employing field staff to walk every street in the nation to verify addresses. Optimizing self-response: Includes efforts to maximize the self-response of households by, among other things, offering an Internet response option. As we have previously reported, to deliver the Internet response option, the Bureau would need to, among other things, design and develop an Internet response application, develop and acquire the IT infrastructure to support a large volume of data processing and storage, and plan communication and outreach strategies to motivate households to respond via the Internet. Using administrative records: This includes expanding the use of data previously obtained by other federal and state government agencies and commercial sources to reduce the need for costly and labor-intensive follow-up work. My colleague will address the Bureau's progress on using administrative records in his statement today. Reengineering field operations: This includes reducing the number of visits to households, automating the management of enumerator work to conduct non-response follow-up, and automating and optimizing case assignment and routing for enumerators to reduce the staffing, infrastructure, and field offices required for the 2020 Census. The Bureau has conducted several major field tests to examine the potential for each of these redesign areas: In mid-2014 the Bureau conducted the 2014 Census Test in the Maryland and Washington, D.C., areas to test new methods for conducting self-response and non-response follow-up. In early 2015 the Bureau completed the Address Validation Test, which was used to examine new methods for updating the Bureau's address list. In mid-2015 the Bureau conducted the 2015 Census Test in Arizona to test, among other things, the use of a field operations management system to automate data collection operations and provide real-time data and the ability to reduce the non-response follow-up workload using data previously provided to the government, as well as enabling enumerators to use their personally owned mobile devices to collect census data. Also in mid-2015, the Bureau conducted an optimizing self-response test in Savannah, Georgia, and the surrounding area, which was intended to further explore methods of encouraging households to respond using the Internet, such as using advertising and outreach to motivate respondents, and enabling households to respond without a Bureau-issued identification number. 
More recently, the Bureau began its National Content Test, which is currently ongoing and intended to, among other things, continue to test self-response modes and contact strategies and refine estimates of national self-response and Internet response rates. These tests were intended to inform the first version of the Bureau's 2020 Census Operational Plan, which is intended to outline design decisions that drive how the 2020 Census will be conducted. As part of these decisions, the Bureau has committed to aspects of the 2020 Census redesign. The operational plan articulated 326 total design decision points, which vary widely in their complexity, importance, and urgency. As of October 6, 2015, the Bureau had made decisions for about 47 percent of them, including decisions related to each of the four redesign areas. For example, the Bureau has decided to conduct 100 percent of address canvassing (i.e., identifying all addresses where people could live) in the office, and target a subset of up to 25 percent for in-the-field address canvassing; offer an Internet self-response option, as well as alternative response options via telephone and paper for limited circumstances; allow people to respond without a unique census identification number; use mobile devices for enumerators to conduct field data collection; use administrative records to enumerate vacant units; use enterprise solutions to support the 2020 Census, when practicable; and reduce the field footprint by half in comparison to the 2010 Census (e.g., 6 regional census centers instead of 12 and up to 250 field offices instead of nearly 500). Figure 1 provides an overview of the Bureau's current plans and assumptions for the 2020 Census, resulting from the October 2015 operational plan. As a result of these decisions, the Bureau estimates saving $5.2 billion. Specifically, the Bureau estimated that repeating the design of the 2010 Census for 2020 would cost approximately $17.8 billion (in constant 2020 dollars), while successfully implementing the four redesign areas is expected to result in an overall 2020 Census cost of $12.5 billion (in constant 2020 dollars). Table 1 illustrates the estimated cost savings associated with each redesign area. Moving forward, the Bureau plans to conduct additional research and testing and further refine the design through 2018. By August 2017, the Bureau plans to begin preparations for end-to-end testing, which is intended to test all systems and operations to ensure readiness for the 2020 Census. Figure 2 shows the timeline for planned 2020 Census research and testing. Concurrent with redesigning the decennial census, the Bureau has also begun a significant effort to modernize and consolidate its survey data collection and processing functions. This is being undertaken through an enterprise-wide IT initiative called Census Enterprise Data Collection and Processing (CEDCAP). This initiative is a large and complex modernization program intended to deliver a system-of-systems for all the Bureau's survey data collection and processing functions—rather than continuing to rely on unique, survey-specific systems with redundant capabilities. For the 2020 Census, CEDCAP is expected to deliver the systems and IT infrastructure needed to implement the Bureau's redesign areas. For example: To reengineer field work, CEDCAP is expected to implement a new dynamic operational control system to track and manage field work. 
This system is to be able to make decisions about which visits enumerators should attempt on a daily basis using real-time data, as well as provide automated route planning to make enumerator travel more efficient. CEDCAP also includes testing the use of mobile devices, either government-furnished or employee-owned, to automate data collection in the field. To maximize self-response with the Internet response option, CEDCAP is responsible for developing and testing a web-based survey application and exploring options for establishing the IT infrastructure to support the increased volume of data processing and storage that will be needed. CEDCAP consists of 12 projects that are to deliver capabilities incrementally, over the course of at least 10 releases. The Bureau plans to roll out capabilities for the 2020 Census incrementally through 6 of these releases, while also deploying capabilities for other surveys such as the American Community Survey and Economic Census. The Bureau expects to reuse selected systems, make modifications to other systems, and develop or acquire additional systems and infrastructure. As of August 2015, the CEDCAP program was projected to cost about $548 million through 2020. However, the Bureau's past efforts to implement new approaches and systems have not always gone well. As one example, during the 2010 Census, the Bureau planned to use handheld mobile devices to support field data collection for the census, including following up with nonrespondents. However, due to significant problems identified during testing of the devices, cost overruns, and schedule slippages, the Bureau decided not to use the handheld devices for non-response follow-up and reverted to paper-based processing as a backup, which increased the cost of the 2010 Census by up to $3 billion and significantly increased its risk. Last month's issuance of the 2020 Census Operational Plan, which documents many key decisions about the redesign of the 2020 Census, represents progress; however, the Bureau faces critical challenges in delivering the IT systems needed to support the redesign areas. Specifically, with preparations for end-to-end testing less than 2 years away, the window to implement CEDCAP, which is intended to be the backbone of the 2020 Census, is narrow. Additionally, while the Bureau has demonstrated improvements in IT management, as we have previously reported, it faces critical gaps in its IT workforce planning and information security. Until it takes actions we have previously recommended to address these challenges, the Bureau is at risk of cost overruns, schedule delays, and performance shortfalls, which will likely diminish the potentially significant cost savings that it estimates will result from redesigning the census for 2020. The Bureau has not prioritized key IT-related decisions, which is a trend we have reported for the past few years. Specifically, in April 2014, we reported the Bureau had not prioritized key IT research and testing needed for the design decisions planned for the end of 2015. In particular, the Bureau had not completed the necessary plans and schedules for research and testing efforts and had not prioritized what needed to be done in time for the 2015 design decisions—a milestone that had already been pushed back by a year (see fig. 3). 
We concluded that, given the current trajectory and the lack of supporting schedules and plans, it was unlikely that all planned IT-related research and testing activities would be completed in time to support the 2015 design decisions—a concern that ultimately came to fruition (as discussed later). In light of these ongoing challenges, we recommended in our April 2014 report that the Bureau prioritize its IT-related research and testing projects that need to be completed to support the design decisions and develop schedules and plans to reflect the new prioritized approach. The Bureau agreed with our recommendations and has taken steps to address them. For example, in September 2014, the Bureau released a plan that identified inputs, such as research questions, design components, and testing, that were needed to inform the operational design decisions expected in the fall of 2015. However, as we reported in February 2015, the Bureau had not yet determined how key IT research questions that had been identified as critical inputs into the design decisions—estimating the Internet self-response rate and determining the IT infrastructure for security and scalability needed to support Internet response—were to be answered. We therefore recommended that the Bureau, among other things, develop methodologies and plans for answering key IT-related research questions in time to inform key design decisions. While the recent 2020 Census Operational Plan documents many key IT-related decisions about the redesign of the census, other critical questions, including the ones identified in our February 2015 report, remain unanswered. Of greater concern, the Bureau does not intend to answer these and other questions until 2016 through 2018. Specifically, there are several significant IT decisions that are being deferred, which have implications for the CEDCAP program's ability to have production-ready systems in place in time to conduct end-to-end testing. For example, the Bureau does not plan to decide on the projected demand that the IT infrastructure and systems would need to accommodate or whether the Bureau will build or buy the needed systems until June 2016, at the earliest; the high-level design and description of the systems (referred to as the solutions architecture) until September 2016—leaving about a year to, among other things, build or acquire, integrate, and test the systems that are intended to serve as the backbone to the 2020 Census before preparations for end-to-end testing begin in August 2017; and the strategy for the use of mobile devices for field work until October 2017. Figure 4 illustrates several key IT-related decisions that have been deferred, which will impact preparations for the end-to-end test and 2020 Census. Unless the Bureau makes these key decisions soon, it will likely run out of time to put CEDCAP systems in place. Institutionalizing key IT management controls, such as IT governance, system development methodology, and requirements management processes, helps establish a consistent and repeatable process for managing and overseeing IT investments and reduces the risk of experiencing cost overruns, schedule slippages, and performance shortfalls, like those that affected the previous census. 
However, in September 2012, we reported that the Bureau lacked a sufficiently mature IT governance process to ensure that investments are properly controlled and monitored, did not have a comprehensive system development methodology, and continued to have long-standing challenges in requirements management. We made several recommendations to address these issues, and the Bureau took actions to fully implement each of the recommendations. For example, the Bureau addressed gaps in policies and procedures related to IT governance, such as establishing guidelines on the frequency of investment review board meetings and thresholds for escalation of cost, risk, or impact issues; finalized its adoption of an enterprise system development life-cycle methodology, which included the short incremental development model, referred to as Agile, and a process for continuously improving the methodology based on lessons learned; and implemented a consistent requirements development tool that includes guidance for developing requirements at the strategic mission, business, and project levels and is integrated with its enterprise system development life-cycle methodology. As a result, the Bureau has established a consistent process for managing and overseeing its IT investments. Effective workforce planning is essential to ensure organizations have the proper skills, abilities, and capacity for effective management. While the Bureau has made progress in IT workforce planning efforts, many critical IT competency gaps remain to be filled. In September 2012 we reported, among other things, that the Bureau had not developed a Bureau-wide IT workforce plan; identified gaps in mission-critical IT occupations, skills, and competencies; or developed strategies to address gaps. Accordingly, we recommended that the Bureau establish a repeatable process for performing IT skills assessments and gap analyses and establish a process for directorates to coordinate on IT workforce planning. In response, in 2013 the Bureau completed an enterprise-wide competency assessment and identified several mission-critical gaps in technical competencies. In 2014, the Bureau established documents to institutionalize a strategic workforce planning process, identified actions and targets to close the competency gaps by December 2015, and established a process to monitor quarterly status reports on the implementation of these actions. However, as we reported in February 2015, while these are positive steps in establishing strategic workforce planning capabilities, the Bureau's workforce competency assessment identified several mission-critical gaps that would challenge its ability to deliver IT-related initiatives, such as the IT systems that are expected to be delivered by CEDCAP. For example, the Bureau found that competency gaps existed in cloud computing, security integration and engineering, enterprise/mission engineering life-cycle, requirements development, and Internet data collection. The Bureau also found that enterprise-level competency gaps existed in program and project management, budget and cost estimation, systems development, data analytics, and shared services. The Bureau has taken steps to regularly monitor and report on the status of its efforts to close competency gaps and has completed several notable actions. For example, in August 2015, the Bureau filled the position of Decennial IT Division Chief and in September 2015 awarded an enterprise-wide IT services contract for systems engineering and integration support. 
However, more work remains for the Bureau to close competency gaps critical to the implementation of its IT efforts. Most significantly, in July 2015, the Chief Information Officer resigned. As of October 2015, the Bureau was working to fill that gap and had an acting Chief Information Officer temporarily in the position. Additionally, there are other gaps in key positions, such as the Chief of the Office of Information Security and Deputy Chief Information Security Officer, Big Data Center Chief, Chief Cloud Architect, and the CEDCAP Assistant Chief of Business Integration, who is responsible for overseeing the integration of schedule, risks, and budget across the 12 projects. According to Bureau officials, they are working to address these gaps. Critical to the Bureau’s ability to perform its data collection and analysis duties are its information systems and the protection of the information they contain. A data breach could result in the public’s loss of confidence in the Bureau, thus affecting its ability to collect census data. To ensure the reliability of their computerized information, agencies should design and implement controls to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate design or implementation of access controls increases the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. In January 2013, we reported on the Bureau’s implementation of information security controls to protect the confidentiality, integrity, and availability of the information and systems that support its mission. We concluded that the Bureau had a number of weaknesses in controls intended to limit access to its systems and information, as well as those related to managing system configurations and unplanned events. We attributed these weaknesses to the fact that the Bureau had not fully implemented a comprehensive information security program, and made 115 recommendations aimed at addressing these deficiencies. The Bureau expressed broad agreement with the report and said it would work to find the best ways to address our recommendations. As of October 29, 2015, the Bureau had addressed 66 of the 115 recommendations we made in January 2013. Of the remaining open recommendations, we have determined that 30 require additional actions by the Bureau, and for the other 19 we have work under way to evaluate if they have been fully addressed. The Bureau’s progress toward addressing our security recommendations is encouraging. However, more work remains to address the recommendations. A cyber incident recently occurred at the Bureau, and while it appears to have had limited impact, it demonstrates vulnerabilities at the Bureau. Specifically, in July 2015, the Bureau reported that it had been targeted by a cyber attack aimed at gaining access to its Federal Audit Clearinghouse, which contains non-confidential information from state and local governments, nonprofit organizations, and Indian tribes to facilitate oversight of federal grant awards. According to Bureau officials, the breach was limited to this database on a segmented portion of the Bureau’s network that does not touch administrative records or sensitive respondent data protected under Title 13 of the U.S. Code, and the hackers did not obtain the personally identifiable information of census and survey respondents. 
Given that the Bureau is planning to build or acquire IT systems to collect the public's personal information for the 2020 Census in ways that it has not for previous censuses (e.g., web-based surveys, cloud computing, and enabling mobile devices to collect census data), continuing to implement our recommendations and apply IT security best practices as it implements CEDCAP systems must be a high priority. As a result of the Bureau's challenges in key IT internal controls and its looming deadline, we identified CEDCAP as an IT investment in need of attention in our February 2015 High-Risk report. We recently initiated a review of the CEDCAP program for your subcommittees, and expect to issue a report in the spring of 2016. In conclusion, the Bureau is pursuing initiatives to significantly reform its outdated and inefficient methods of conducting decennial censuses. However, with less than 2 years remaining until the Bureau plans to have all systems and processes for the 2020 Census developed and ready for end-to-end testing, it faces challenges that pose significant risk to the 2020 Census program. These include the magnitude of the planned changes to the design of the census, the Bureau's prior track record in executing large-scale IT projects, and the current lack of a permanent Chief Information Officer, among others. Moreover, the Bureau's preliminary decision deadline has come and gone, and many IT-related decisions have been deferred to 2016 through 2018. Consequently, it is running out of time to develop, acquire, and implement the production systems it will need to deliver the redesign and achieve its projected $5.2 billion in cost savings. The Bureau needs to take action to address the specific challenges we have highlighted in prior reports. If these actions are not taken, cost overruns, schedule delays, and performance shortfalls may diminish the potentially significant cost savings that the Bureau estimates will result from redesigning the census for 2020. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this statement, please contact Carol Cha, Director, Information Technology Acquisition Management Issues, at (202) 512-4456 or chac@gao.gov. Other individuals who made key contributions include Shannin O'Neill, Assistant Director; Andrew Beggs; Lee McCracken; and Jeanne Sung. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The cost of the nation's decennial census has steadily increased over the past 40 years; the 2010 Census was the most expensive to date, at about $13 billion. To achieve cost savings while still conducting an accurate count of the population, the U.S. Census Bureau is planning significant changes for the design of the 2020 Decennial Census, including major efforts to implement new technologies and IT systems supporting its surveys. For example, the Bureau is planning to offer an option for households to respond via the Internet, which requires developing new applications and IT infrastructure. This statement summarizes the critical challenges the Bureau faces in successfully delivering IT systems in time for testing redesigned 2020 Census operations. To develop this statement, GAO relied on previously published work, as well as information on steps the Bureau has taken to implement prior GAO recommendations. GAO has previously reported that the U.S. Census Bureau (Bureau) faces a number of critical challenges in developing and deploying the information technology (IT) systems and infrastructure it plans to rely on to conduct the significantly redesigned 2020 Census. Specifically, the Bureau has a major IT program under way to modernize and consolidate the multiple, duplicative systems it currently uses to carry out survey data collection and processing functions; however, with less than 2 years before preparations begin for end-to-end testing of all systems and operations to ensure readiness for the 2020 Census, there is limited time to implement it. While the Bureau documented many key decisions about the redesigned 2020 Census in the 2020 Census Operational Plan, released in October 2015, several key IT-related decisions have not been made. Specifically, the Bureau has not yet made decisions about the projected demand that the IT infrastructure would need to meet or whether it will build or buy the needed systems. This lack of prioritization of IT decisions has been a continuing trend, which GAO has previously identified. For example: In April 2014, GAO reported that the Bureau had not prioritized key IT research and testing needed for its design decisions. Accordingly, GAO recommended that the Bureau prioritize its IT-related research and testing projects. The Bureau had taken steps to address this recommendation, such as releasing a plan in September 2014 that identified research questions intended to inform the operational design decisions. In February 2015, however, GAO reported that the Bureau had not determined how key IT research questions that were identified in the September 2014 plan would be answered—such as the expected rate of respondents using its Internet response option or the IT infrastructure that would be needed to support this option. GAO recommended that the Bureau, among other things, develop methodologies and plans for answering key IT-related research questions in time to inform design decisions. However, this has not yet happened. In addition, while the Bureau has made improvements in some key IT management areas, it still faces challenges in the areas of workforce planning and information security. Specifically: It has taken steps to develop an enterprise-wide IT workforce planning process, as GAO recommended in 2012. However, the Bureau has yet to fill key positions. Most concerning, it is currently without a permanent chief information officer. 
The Bureau has taken steps to implement the majority of the 115 recommendations GAO made in 2013 to address information security weaknesses; however, completing this effort is necessary to ensure that sensitive information it will collect during the census is adequately protected. With the deferral of key IT-related decisions, the Bureau is running out of time to develop, acquire, and implement the systems it will need to deliver the redesign and achieve its projected $5.2 billion in cost savings. In its prior work, GAO made recommendations to the Census Bureau to prioritize IT research and testing and to determine how key research questions informing 2020 Census design decisions would be answered. GAO also made recommendations to improve IT management, workforce planning, and information security. The Bureau has taken steps to address selected recommendations, but more actions are still needed to fully address them.
Each military service—the Army, the Navy, the Air Force, and the Marine Corps—is responsible for assessing and making decisions regarding the ammunition in its inventory. The Army, as the Single Manager for Conventional Ammunition (SMCA), is responsible for centrally managing the demilitarization of all conventional ammunition, including non-SMCA-managed items for which capability, technology, and facilities exist to complete demilitarization and disposal. The services determine whether the conventional ammunition in their accounts is unserviceable or above their needs, and if so, transfer the ammunition to installations as specified by the SMCA. However, before proceeding with demilitarization, any serviceable conventional ammunition that is beyond a service's needs is to be offered to the other services through an annual cross-leveling process. The services are to allow the other military services to screen all conventional ammunition inventories that are beyond their needs. Once the screening is complete, the service can transfer ammunition to the demilitarization account as DOD excess, except when safety issues require immediate disposal. As shown in figure 1, once it has been determined that the conventional ammunition is unserviceable or DOD excess, the services deliver the ammunition to one of the seven demilitarization depots in the United States and the ammunition is incorporated into the CAD stockpile. Appendix III provides a map of the seven demilitarization depots and an explanation of the demilitarization methods used by the Army. Multiple DOD entities have responsibilities related to managing and overseeing conventional ammunition, with the Army having a prominent role. The Secretary of the Army serves as DOD's SMCA and is responsible for centrally managing all aspects of the life cycle of conventional ammunition, from research and development through demilitarization and disposal. The Program Executive Office for Ammunition has been designated the SMCA Executor and is responsible for executing all the functions of the SMCA. The Program Executive Office for Ammunition works with Joint Munitions Command and the Aviation and Missile Command to manage the demilitarization of conventional ammunition at seven Army depots and several commercial firms. The Program Executive Office for Ammunition budgets and funds the demilitarization and disposal of all munitions in the CAD stockpile. In addition, for ammunition, such as Bullpup rockets, that has no demilitarization process, the Program Executive Office for Ammunition plans, programs, budgets, and funds a joint-service research and development program to develop the necessary capability, technology, and facilities to demilitarize the ammunition. Within the Army Materiel Command, Army Aviation and Missile Command is responsible for the demilitarization of missiles and components, and the Joint Munitions Command is responsible for demilitarization of all remaining conventional ammunition. Army Aviation and Missile Command develops and implements the annual missile demilitarization operating plan, and Joint Munitions Command does the same for the CAD stockpile. Furthermore, Joint Munitions Command provides logistics and sustainment support to the Program Executive Office for Ammunition and the Army Aviation and Missile Command. 
Joint Munitions Command also oversees the storage of the CAD stockpile, maintains the CAD stockpile database, and arranges the transportation of conventional ammunition to the demilitarization site when necessary. The military departments have a process for collecting and sharing data on conventional ammunition through inventory stratification reports that they are required to prepare at least annually. They use these reports to identify inventory owned by one department that may be available to meet the needs of another department, as well as to identify both inventory deficiencies and excesses. DOD Manual 4140.01 Volumes 6 and 10 direct the military departments to assess the ability of the ammunition inventory to meet their needs by stratifying their inventories into various categories and require them to prepare a report at least annually for internal purposes that lists the current inventory levels of all ammunition. The annual internal report divides the inventory into the categories of requirement-related munitions stock, economic retention munitions stock, contingency retention munitions stock, and potential reutilization and disposal stock. The manual also directs the departments to develop an external report identifying inventory in the same categories for each ammunition item listed. The military departments are to use these reports, among other things, to identify opportunities for redistributing ammunition to meet unfilled needs in other military departments. The reports are then distributed to the other military departments to provide visibility. In addition, the Office of the Executive Director for Conventional Ammunition, which facilitates this process, compares the data in the inventory reports with data on planned procurements of ammunition. After the departments share their annual reports on ammunition inventory, including which ammunition could be reutilized, department officials participate in the Quad Services Review and review all the other departments' stratification reports to identify potential cross-leveling opportunities and request logistics data for items of interest. DOD guidance indicates that this cross-leveling process should be used to offset individual procurements of the military departments in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics. For example, the Executive Director for Conventional Ammunition reported in September 2014 that DOD avoids an average of $72 million annually in procurement costs by using the redistribution process to apply inventory holdings that exceed one service's needs against the requirements of another. During the fiscal year 2014 redistribution process, the services transferred approximately 5 million items among each other, of which approximately 3 million were small-caliber items such as ammunition for rifles or pistols, about 2 million were for larger-caliber weapons such as mortars, and about 383,000 were a mixture of other types of ammunition. According to the Office of the Executive Director for Conventional Ammunition's Fiscal Year 2014 Cross-leveling End of Year Report, the potential acquisition cost avoidance achieved in the 2014 cross-leveling process totaled about $104.2 million. 
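To make the mechanics of this cross-leveling step concrete, the short Python sketch below matches one department's potential reutilization stock against another department's unfilled needs and tallies the procurement cost avoided. It is a hypothetical illustration only; the items, quantities, and unit costs are invented and do not come from the stratification reports discussed above.

    # Hypothetical illustration of matching one service's excess ammunition
    # against another service's unfilled needs during cross-leveling.
    excess = {            # potential reutilization stock (invented figures)
        "5.56mm ball": {"owner": "Army", "qty": 1_200_000, "unit_cost": 0.30},
        "81mm mortar": {"owner": "Navy", "qty": 15_000,    "unit_cost": 45.00},
    }
    unfilled_needs = {     # another department's shortfalls (also invented)
        "5.56mm ball": {"requester": "Marine Corps", "qty": 800_000},
        "81mm mortar": {"requester": "Air Force",    "qty": 20_000},
    }

    transfers, cost_avoided = [], 0.0
    for item, need in unfilled_needs.items():
        stock = excess.get(item)
        if not stock:
            continue
        qty = min(stock["qty"], need["qty"])          # transfer what is available
        transfers.append((item, stock["owner"], need["requester"], qty))
        cost_avoided += qty * stock["unit_cost"]      # procurement the requester avoids

    for item, owner, requester, qty in transfers:
        print(f"{owner} -> {requester}: {qty:,} x {item}")
    print(f"estimated acquisition cost avoidance: ${cost_avoided:,.0f}")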
DOD guidance requires that at the end of the annual cross-leveling process, any remaining unclaimed potential reutilization and disposal stock should either be claimed by another military department, recategorized, or designated for disposal, whether through the Defense Logistics Agency Disposition Services Account or the CAD stockpile, as appropriate. We last reported on DOD's management of conventional ammunition in March 2014. We found that the Army's annual stratification report, which shows the status of ammunition inventory at a given point in time, did not include information on all usable ammunition items because it did not include missiles managed by the Army Aviation and Missile Command. Since the Army's missiles were not included in the annual stratification report, they were not considered during the cross-leveling process. Further, we found that items above the services' needs in a prior year that had been placed into the CAD stockpile were not considered in the cross-leveling process. We made recommendations to improve data sharing among the services, and DOD concurred with all of these recommendations. Among our recommendations was to include missiles managed by the Army Aviation and Missile Command in the annual stratification report, and DOD stated that starting with the March 2014 annual stratification meeting the Army would provide missile information for the cross-leveling process. As a result, 100 Javelin missiles were identified for transfer from the Army to the Marine Corps in fiscal year 2015, potentially saving the Marine Corps an estimated $3 million. Further, we recommended the Army include information on ammunition that in a previous year was unclaimed by another service and had been categorized for disposal. In response, DOD officials stated that all of the military services have visibility into the Army system that tracks ammunition categorized for disposal and they would direct the military services to consider such ammunition in the cross-leveling process. In 2015, the Navy and the Air Force identified materiel worth about $488,000 in the CAD stockpile from prior years that they could use. The services maintain information on their conventional ammunition; however, some inventory records for ammunition in the CAD stockpile have incorrect or incomplete information on its condition and weight. As discussed earlier, each service has its own inventory information system to maintain its conventional ammunition inventory, which includes any unserviceable ammunition or ammunition above its needs in its custody. Consolidated information from the military services on the ammunition in the CAD stockpile is maintained in the Army's Logistics Modernization Program (LMP). DOD Instruction 5160.68 directs the services to provide the SMCA with data on ammunition transferred for demilitarization and disposal operations. LMP has information on the location and quantity of all items in the CAD stockpile, but some records have incomplete or incorrect data on condition and weight. Further, according to DOD officials, each item has a condition code assigned to it by the service when placed into the CAD stockpile, and the condition code is not updated while the item is in the stockpile. Service officials stated that when they are considering pulling an item from the stockpile to fill a current need, they generally inspect the condition of the item to determine whether the condition code of the item is still accurate and the item is usable. 
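Because a condition code is assigned when an item enters the stockpile and is not updated afterward, a record can still read as serviceable after the item's shelf life has lapsed. The short Python sketch below is a hypothetical record check of that kind, not a depiction of LMP or its schema; the field names, codes, and dates are invented.

    from datetime import date

    # Hypothetical stockpile records; LMP's real schema is not shown here.
    records = [
        {"item_id": "001", "condition_code": "A", "shelf_life_expires": date(2014, 6, 30)},
        {"item_id": "002", "condition_code": "A", "shelf_life_expires": date(2020, 1, 1)},
        {"item_id": "003", "condition_code": "F", "shelf_life_expires": None},
    ]

    def needs_inspection(record, as_of=date(2015, 10, 1)):
        """Flag items whose code still says serviceable ('A' in this sketch)
        but whose shelf life has lapsed since the code was assigned."""
        expires = record["shelf_life_expires"]
        return record["condition_code"] == "A" and expires is not None and expires < as_of

    flagged = [r["item_id"] for r in records if needs_inspection(r)]
    print("inspect before reuse:", flagged)   # -> ['001']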
At times, the services have found particular items with a condition code indicating the materiel was serviceable, but the item's shelf life had expired, while other ammunition had performance issues that made it unacceptable. Further, we found that DOD does not have the weight data for a number of items in the CAD stockpile. Standards for Internal Control in the Federal Government state that an entity should have controls to ensure that all transactions are complete and accurately recorded. In our review of data in the LMP database from 2012 to February 2015, the number of records without assigned weight increased from 2,223 (out of 34,511 records) to 2,829 (out of 36,355 records), which shows the problem is growing. Although some of the records that are missing weight data have very few items in storage, there are several items with significant quantities, such as 3.8 million chaff countermeasures, 125,000 75 millimeter projectiles, and 109,000 rounds of 155 millimeter ammunition. LMP lists the gross weight of an individual item (shell, missile, or cartridge); however, officials involved in the demilitarization of conventional ammunition use pro-weight, which includes the weight of the item plus its packaging. Pro-weight is used because the demilitarization process has to recycle or otherwise dispose of all packaging material and containers in addition to destroying the ammunition. CAD stockpile weights are described in short tons; a short ton is equal to 2,000 pounds. DOD uses weight data as a metric in managing the demilitarization of conventional ammunition. More specifically, SMCA officials use weight for (1) developing cost estimates for demilitarization projects; (2) determining what conventional ammunition should be demilitarized; (3) reporting the size of the CAD stockpile to the military services, the Office of the Secretary of Defense, and Congress; (4) forecasting the amount of conventional ammunition to be transferred into the CAD stockpile in the future; and (5) reporting on what ammunition has been demilitarized. The absence of weight data for some of the items in the CAD stockpile understates the size and composition of the CAD stockpile, thereby affecting DOD's estimates of its demilitarization needs. According to DOD officials, the reasons for the missing data in the CAD stockpile are related to the types of items transferred into the stockpile, such as older ammunition stocks that do not have complete weight data, nonstandard ammunition, foreign ammunition used for testing, components removed from larger weapons, and ammunition with records that migrated from legacy data systems. DOD officials stated they are trying to correct current records with missing or inaccurate data, particularly weight. In some cases, such as older stocks, the only solution is to locate and physically weigh the ammunition item(s). DOD officials have not weighed the items because they said it would be costly and labor intensive. However, since the items without weight data are not factored into DOD's demilitarization determination, DOD is not positioned to optimally demilitarize the most ammunition possible with the given resources available. Further, as discussed above, the number of records without weight data has increased over the years, which indicates that SMCA continues to accept materiel into the CAD stockpile without weight data. 
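The distinction between gross weight and pro-weight, and the way missing weight data understate the stockpile, can be shown with a small example. In the Python sketch below, the records and weights are invented; only the use of pro-weight (item plus packaging) and the conversion of 2,000 pounds per short ton reflect the discussion above.

    LBS_PER_SHORT_TON = 2000.0

    # Invented records: per-item gross weight and packaging weight in pounds.
    # A weight of None stands in for the missing data discussed above.
    stockpile = [
        {"item": "item with complete record", "qty": 100_000,   "gross_lbs": 95.0, "packaging_lbs": 25.0},
        {"item": "item missing weight data",  "qty": 3_000_000, "gross_lbs": None, "packaging_lbs": None},
    ]

    known_tons, unweighed = 0.0, []
    for rec in stockpile:
        if rec["gross_lbs"] is None:
            unweighed.append(rec["item"])        # excluded from tonnage metrics
            continue
        pro_weight_lbs = rec["gross_lbs"] + rec["packaging_lbs"]   # item plus packaging
        known_tons += rec["qty"] * pro_weight_lbs / LBS_PER_SHORT_TON

    print(f"reported stockpile size: {known_tons:,.0f} short tons")
    print("items understating the total because weight is missing:", unweighed)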
Officials from all the military services said they have access to LMP and have used it to search the CAD stockpile for materiel they could use, but information on DOD excess is not widely shared with other government agencies such as the Department of Homeland Security, which also uses ammunition for purposes such as training exercises. Specifically, the military services have achieved benefits such as cost avoidances from access to the information in LMP on the CAD stockpile. For example, an Air Force need for 280,000 rounds of 40 millimeter ammunition was met by the remanufacture of Navy 40 millimeter shells drawn from the CAD stockpile, which according to Joint Munitions Command officials saved an estimated $30 million. Also, the Marine Corps identified the need for signal flares at security checkpoints in Iraq and Afghanistan, so it pulled 95,594 flares out of the CAD stockpile, which according to Marine Corps officials saved the service an estimated $3.8 million. When the services have been able to fulfill needs by drawing ammunition from the CAD stockpile, financial benefits have arisen both in reduced demilitarization costs over time and reduced new procurements. DOD also has reduced its demilitarization costs by transferring some excess ammunition to other government agencies as opposed to demilitarizing the ammunition, but has made such transfers only on a limited basis. For example, in fiscal year 2014 DOD provided 38 million rounds of small arms ammunition to the Federal Bureau of Investigation and 7.5 million rounds of small arms ammunition to the U.S. Marshals Service. Officials stated that the Joint Munitions Command and Army Deputy Chief of Staff for Logistics (G-4) used informal methods to communicate with other government agencies on available excess ammunition. Recognizing that there are benefits to such transfers, the Office of the Executive Director for Conventional Ammunition, in its Fiscal Year 2014 Cross-Leveling End of Year Report, included remarks indicating efforts should be made to include other government agencies in the cross-leveling process. Communicating with other government agencies on available excess ammunition could help reduce the CAD stockpile. Section 346 of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, as amended, requires, among other things, that serviceable small arms ammunition and ammunition components in excess of military needs not be demilitarized, destroyed, or disposed of unless in excess of commercial demands or certified as unserviceable or unsafe by the Secretary of Defense. Before offering the excess serviceable small arms ammunition for commercial sale, however, this provision outlines a preference that DOD offer the small arms ammunition and ammunition components for purchase or transfer to other federal government agencies and departments, or for sale to state and local law enforcement, firefighting, homeland security, and emergency management agencies as permitted by law. According to officials, DOD does not have a formal process for offering the excess small arms ammunition and components to other government agencies. A DOD manual references 10 U.S.C. 
§ 2576a, under which DOD is permitted to transfer (sell or donate) ammunition to federal or state agencies where the Secretary of Defense determines that the ammunition is "(A) suitable for use by the agencies in law enforcement activities, including counter-drug and counter-terrorism activities; and (B) excess to the needs of the Department of Defense." The ammunition must also be part of the existing stock of DOD, accepted by the recipient agency on an as-is, where-is basis, transferred without the expenditure of any funds available to DOD for the procurement of defense equipment, and transferred such that all costs incurred subsequent to the transfer of the property are borne or reimbursed by the recipient agency. Finally, there is a stated preference for those applications indicating that the transferred property will be used in counter-drug or counter-terrorism activities of the recipient agency. The Department of Homeland Security, for example, uses ammunition to meet training and qualification requirements. However, due to budget constraints, the Department of Homeland Security reduced the number of training classes. If DOD guidance outlining a systematic process to share information on excess ammunition had been in place, the Department of Homeland Security might have been aware of, and could have obtained, selected ammunition needed for its training classes. Standards for Internal Control in the Federal Government states that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. Transfers of ammunition to other government agencies, subject to certain requirements, could support DOD's goal of reducing its CAD stockpile in a manner consistent with section 346 of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, as amended. Without a systematic means of communicating with other government agencies and providing them with information on available excess serviceable ammunition, those agencies could be spending their funds to procure ammunition that DOD has awaiting demilitarization and could provide to them. In addition, without such a means, DOD could miss opportunities to reduce its overall demilitarization and maintenance costs by transferring such ammunition to other government agencies. DOD has identified a number of challenges in managing the demilitarization of conventional ammunition, and has taken actions to address them. These challenges include compliance with environmental regulations; treaties regarding certain types of ammunition; services' forecasts of excess, obsolete, and unserviceable ammunition; and annual funding. Specifically, DOD officials described the following challenges and the actions being taken to address them: Environmental Regulation Compliance: SMCA officials stated they must follow environmental laws in demilitarizing conventional ammunition and their compliance is governed by environmental permits that cover the design and operation of facilities that deal with waste management, noise, air, water, and land emissions. Many munitions are harmful to human health and the environment, and demilitarizing large quantities of ammunition requires the rigorous control and processing of toxic substances. Some of the demilitarization processes generate additional environmental hazards, such as air pollutants and wastewater. Figure 2 shows the release of air pollutants from the open burning of munitions. 
Other demilitarization processes, such as open detonation, generate noise pollution affecting the local community. According to SMCA officials, open burn and open detonation are the primary and cheapest methods to demilitarize conventional ammunition; further, some munitions can only be demilitarized by this process. All seven depots that demilitarize conventional ammunition have the capability to demilitarize ammunition through open burn/open detonation. However, officials stated there are environmental concerns with open burn/open detonation that may force DOD to use alternate and more costly methods of disposal, such as closed disposal technologies, in the future. For example, officials at one demilitarization facility noted that they generally operated their open detonation demolition ranges at less than 50 percent of capacity (weight of explosive charge) due to air and noise pollution concerns. According to DOD officials, DOD works to ensure compliance with various environmental regulations by applying for and maintaining permits issued by federal and state agencies that regulate its demilitarization operations. Officials indicated that these permits are granted by federal and state agencies and specify which pollutants can be released and in what quantities, as well as describe in detail how each process controls pollutants and meets applicable standards. If environmental regulations change, DOD officials indicated they may need to renew their permits; if the permits are revised, DOD may be required to fund capital investments in equipment and processes to conform to the requirements of any new permits. SMCA officials stated they address these challenges by including in each annual demilitarization plan sufficient work for each depot to exercise existing environmental permits so the permits do not lapse. Also, they recycle or remanufacture, when possible, materiel that otherwise would be destroyed. Finally, the officials indicated that they also contract with private companies to conduct some of the demilitarization work. Treaty Compliance: The U.S. government is considering two treaties that, if ratified, would significantly affect U.S. demilitarization operations. One treaty is the Convention on Cluster Munitions and the other is the Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on Their Destruction. DOD has an inventory of 471,726 tons of cluster munitions and 23,436 tons of anti-personnel landmines that will have to be disposed of if the United States ratifies the two treaties. Specifically, the conventions require the destruction of the respective cluster munitions and landmine inventories, and to comply, DOD officials stated that they would be forced to prioritize disposal of these weapons without regard to maximizing the reduction of the CAD stockpile. Service Forecasts: SMCA officials said that DOD’s demilitarization budget request frequently does not match actual funding needs. The request is based upon the estimated disposal costs required to reduce the existing CAD stockpile, as well as costs for disposing of ammunition the services forecast they will submit for disposal. Each of the services is required to submit a 5-year forecast of the amount of ammunition it expects to turn in for demilitarization each year. However, program officials indicated that the services’ forecasts are generally inaccurate, which can make demilitarization planning challenging. 
In 2010, the Army Audit Agency found that Army officials had significantly understated the forecasted annual additions the services would transfer to the CAD stockpile from 2005 to March 2009, and these estimates were based on the projections furnished by the services. The Army Audit Agency recommended that the Joint Conventional Ammunition Policies and Procedures 7 (Demilitarization and Disposal) be revised to help the military services develop better forecasts for additions to the stockpile. In its 2013 follow-up report, the Army Audit Agency found that the Joint Conventional Ammunition Policies and Procedures 7 (Demilitarization and Disposal) had been revised in 2011; however, the forecast additions for fiscal year 2012 were still inaccurate. SMCA officials told us that they still received inaccurate forecast information from the services. The SMCA officials stated they have no control over the ammunition the services actually transfer year to year, and they accept all excess, obsolete, and unserviceable conventional ammunition into the CAD stockpile, even if it exceeds forecasts. DOD officials stated they do not have options to address any problems caused by unplanned additions to the CAD stockpile, although DOD recalculates the demilitarization plan to include the additional ammunition when appropriate. Annual Funding: SMCA officials stated that the Army requests less funding than needed to meet its critical requirement each year, which could result in the CAD stockpile growing if the amount of ammunition demilitarized is less than the amount of ammunition transferred from the services during the year. The critical requirement is the funding necessary to demilitarize 3 percent of the existing stockpile and the full amount of ammunition the services plan to add to the CAD stockpile during the year. In December 2013, the Army Audit Agency reported that the Army Deputy Chief of Staff for Logistics (G-4) estimated the critical funding level for the demilitarization of conventional ammunition at approximately $185 million. Further, the report stated that the conventional ammunition demilitarization program is considered a lower priority when compared to other needs. The Department of the Army’s budget request for conventional ammunition demilitarization was $114 million for fiscal year 2015 and $113 million for fiscal year 2016. Officials stated that these funding levels have made them reluctant to initiate projects that increase demilitarization capacity or efficiency, since the added capabilities may not be utilized in the future due to funding shortfalls. Furthermore, officials stated they lack Research, Development, Test, and Evaluation (RDT&E) funding to develop demilitarization processes for the disposal of some materiel in the CAD stockpile that cannot be demilitarized using current processes, but they expect these funds to be increased in fiscal year 2017. SMCA addresses the funding challenge each year by developing an annual demilitarization plan to dispose of as much of the CAD stockpile as possible with the funding it receives. DOD officials have estimated the average cost to store, maintain, and dispose of excess, obsolete, and unserviceable conventional ammunition. DOD officials stated that in fiscal year 2015, it costs on average about $42 per ton to store conventional ammunition. 
This number was determined using the estimated cost to perform annual inventory counts, surveillance inspections of ammunition, and housekeeping movement of stocks to manage the storage space. Additionally, DOD officials stated that in fiscal year 2015, it costs on average about $2,000 per ton to demilitarize conventional ammunition. This cost is driven by the quantities and the complexity of the items being demilitarized. DOD has not conducted a formal analysis comparing the costs of storing excess, obsolete, and unserviceable conventional ammunition with the costs of its demilitarization and disposal. Based on our review of key DOD conventional ammunition demilitarization guidance, there is no requirement to conduct a cost comparison. DOD officials told us that since there is a large difference in the cost to store and the cost to demilitarize ammunition based on their estimates, they believe there is no need to conduct a formal analysis. Further, DOD officials stated their mission is to demilitarize all conventional ammunition in the CAD stockpile and the annual decisions on what to demilitarize are based on achieving that goal. For information on how SMCA officials determine what conventional ammunition to demilitarize, see appendix IV. Efficient management of the CAD stockpile and DOD’s demilitarization effort is important to ensure that as much hazardous material is disposed of as possible using the resources available. In order to meet its goals, the department needs accurate data, which requires complete and accurate documentation of the items transferred into the stockpile each year by the services, as well as ammunition already in the stockpile. Standards for Internal Control in the Federal Government state that an entity should have controls to ensure that all transactions are complete and accurately recorded. DOD does maintain data on conventional ammunition in the stockpile and uses it to manage demilitarization efforts, but officials have not fully maintained accurate and complete weight data on some ammunition items, which factors into their decision making about what to demilitarize in a given year. Without complete and accurate data, DOD is not well positioned to make the best demilitarization decisions and to use demilitarization resources as efficiently as possible. Furthermore, efficient management of the CAD stockpile is not solely a matter of demilitarization, since some materiel in it potentially could be transferred to other agencies, in keeping with DOD regulations and statutory requirements. Such transfers could allow DOD to reduce demilitarization costs and the size of the CAD stockpile while also reducing the need for other government agencies to procure new stocks of ammunition. While at times transfers have led to cost savings, there has not been a formal means to regularly communicate with external stakeholders about the availability of excess ammunition in the stockpile, which is necessary to meet DOD’s goals. Without a systematic means to communicate information on excess ammunition to other government agencies, DOD will miss opportunities to reduce the CAD stockpile and demilitarization costs through transfers. 
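To put the per-ton storage and demilitarization estimates above in context, the short sketch below works through the arithmetic of storing a quantity of ammunition versus demilitarizing it. It is illustrative only: the tonnage and storage horizon are hypothetical values chosen for the example, and the break-even framing is ours rather than a DOD cost model.

```python
# Illustrative comparison of DOD's reported fiscal year 2015 per-ton averages.
# The tonnage and storage horizon below are hypothetical, not DOD figures.

STORAGE_COST_PER_TON_PER_YEAR = 42   # reported average storage cost ($/ton/year)
DEMIL_COST_PER_TON = 2_000           # reported average demilitarization cost ($/ton)

def storage_cost(tons: float, years: float) -> float:
    """Cumulative cost of keeping a given tonnage in storage."""
    return tons * STORAGE_COST_PER_TON_PER_YEAR * years

def demil_cost(tons: float) -> float:
    """One-time cost of demilitarizing the same tonnage."""
    return tons * DEMIL_COST_PER_TON

if __name__ == "__main__":
    tons = 10_000   # hypothetical quantity
    years = 5       # hypothetical storage horizon
    breakeven_years = DEMIL_COST_PER_TON / STORAGE_COST_PER_TON_PER_YEAR
    print(f"Store {tons:,} tons for {years} years: ${storage_cost(tons, years):,.0f}")
    print(f"Demilitarize {tons:,} tons once:      ${demil_cost(tons):,.0f}")
    print(f"Storage equals the one-time demilitarization cost after ~{breakeven_years:.0f} years")
```

Under these averages, several decades of storage cost roughly as much as demilitarizing the same tonnage once, which is consistent with officials' view that the gap between the two estimates makes a formal comparison unnecessary.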
To improve the efficiency of DOD’s conventional demilitarization efforts, including systematically collecting and maintaining key information about the items in its CAD stockpile and sharing information on excess items with other government agencies, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following two actions. To improve the completeness and accuracy of information on the weight of items in the CAD stockpile—the key measure used by DOD to manage the conventional ammunition demilitarization operation—establish a plan to (1) identify and record, to the extent possible, the missing or inaccurate weight information for existing ammunition records in the CAD stockpile and (2) ensure that all items transferred to the CAD stockpile, including, for example, components removed from larger weapons and nonstandard ammunition, have the appropriate weight data. To improve the visibility and awareness of serviceable excess ammunition in the CAD stockpile that could potentially be transferred to other government agencies, develop a systematic means to make information available to other government agencies on excess ammunition that could be used to meet their needs. We provided a draft of this report to DOD for review and comment. In its written comments, reproduced in appendix V, DOD concurred with both of the recommendations. DOD also provided technical comments on the draft report, which we incorporated as appropriate. DOD concurred with our first recommendation that the Secretary of the Army establish a plan to (1) identify and record, to the extent possible, the missing or inaccurate weight information for existing ammunition records in the CAD stockpile and (2) ensure that all items transferred to the CAD stockpile, including, for example, components removed from larger weapons and nonstandard ammunition, have the appropriate weight data. DOD stated that the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics would ensure that the Secretary of the Army is tasked to identify and record, to the extent practicable, weight data for the existing CAD stockpile and for items transferred to the CAD stockpile in the future. In response to our second recommendation that the Secretary of the Army develop a systematic means to make information available to other government agencies on excess ammunition that could be used to meet their needs, DOD stated that the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics would ensure that the Secretary of the Army is tasked to develop a systematic means to make information available to other government agencies on excess ammunition. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VI. 
The Department of Defense (DOD) has policies and procedures that help govern the demilitarization of excess, obsolete, and unserviceable conventional ammunition and DOD officials involved in the demilitarization of conventional ammunition stated they believe the policies and guidance issued are effective to govern demilitarization. Additionally, depots have used the policies and guidance to develop their own implementing guidance and standard operating procedures for use at their locations. For example, Tooele Army Depot developed a letter of instruction for the inspection and disposal of inert material and Crane Army Ammunition Plant has developed several standard operating procedures to govern the base’s demilitarization processes. The table below provides an overview of key DOD policies on demilitarization. DOD Instruction 5025.01, DOD Issuances Program, establishes guidance for directives, instructions, manuals, and charters such as frequency of updates, length, purpose, and appropriate approval level. The guidance documents we reviewed in the table above conform to the requirements under DOD Instruction 5025.01: DOD Instruction 5025.01 provides that directives, instructions, and manuals (issuances) published before March 25, 2012 should be updated or cancelled within 10 years of their publication date, and that those published or changed after that date will be processed for cancellation by the Directives Division on the 10-year anniversary of their original publication dates, unless an extension is approved. That said, even for those issuances not required to be cancelled within 10 years, an issuance is not considered current when it is not within 10 years of its publication date. The directives, instructions and manuals we reviewed in the table above conformed to this requirement. For example, DOD Directive 5160.65, Single Manager for Conventional Ammunition (SMCA) was published on August 1, 2008. Therefore, it is not required to be updated or cancelled until August 2018. DOD Instruction 5025.01 provides that directives should not be more than 10 pages in length (including enclosures, with no procedures, and with the exception of charters); instructions not more than 50 pages (including enclosures) or they should be divided into volumes; and manuals should be divided into two or more volumes if more than 100 pages are required. The directives, instructions, and manuals we reviewed in the table above were within the established parameters. For example, DOD Instruction 5160.68 is 21 pages, which is within the required maximum limit of 50 pages for instructions not divided into multiple volumes. DOD Instruction 5025.01 requires that DOD directives exclusively establish policy, assign responsibility, and delegate authority to the DOD Components. Directives will not contain procedures. DOD instructions either implement policy or establish policy and assign responsibilities, and may provide general procedures for carrying out or implementing those policies. DOD manuals provide detailed procedures for implementing policy established in instructions and directives. The directives, instructions, and manuals we reviewed in the table above established and implemented policy as required. For example, DOD Instruction 5160.68 assigns responsibilities and mission functions for conventional ammunition management to the Secretary of the Army, the military services, and USSOCOM. 
DOD Instruction 5025.01 states that, generally, directives are to be signed by the Secretary of Defense or Deputy Secretary of Defense. Depending on the nature of the instruction, instructions must be signed by the component head in the Office of the Secretary of Defense, his or her Principal Deputy, or an Office of the Secretary of Defense Presidentially appointed, Senate-confirmed official. Manuals must be signed by an individual in one of these positions, as authorized by their chartering directives. The directives, instructions, and manuals we reviewed in the table above were signed by the appropriate officials. For example, DOD Directive 5160.65 was appropriately signed by the Deputy Secretary of Defense. DOD Instruction 5025.01 states that charters must define the scope of functional responsibility and identify all delegated authorities for the chartered organization. The SMCA charter defines responsibility and authorities, for example, by delegating to the Deputy Commanding General for Army Materiel Command the role of Executive Director for Conventional Ammunition and provides authorities as needed to execute the SMCA mission. To assess the extent to which the Department of Defense (DOD) has adequately maintained and shared information on the quantity, value, condition, and location of excess, obsolete, and unserviceable conventional ammunition for each military service, we reviewed DOD’s inventory data on excess, obsolete, and unserviceable conventional ammunition held in the conventional ammunition awaiting demilitarization and disposal (CAD) stockpile as of February 2015 to determine how complete and accurate the data are. The scope of the audit was limited to the materiel in the CAD stockpile and ammunition in the services’ inventory that was unserviceable or in excess of the services’ needs. We interviewed Army, Navy, Marine Corps, and Air Force officials to determine how they manage unserviceable ammunition and serviceable ammunition that is beyond the services’ needs. We also determined the extent to which the information in the services’ ammunition inventory systems is useful for their purposes. We interviewed Single Manager for Conventional Ammunition (SMCA) and service officials to learn how information on excess, obsolete, and unserviceable ammunition is shared. After initial discussions with DOD officials, we determined that the department does not consider the value of ammunition in the management of its CAD stockpile, so we did not review the value of the conventional ammunition. Further, we conducted a data reliability assessment of the Air Force Combat Ammunition System, the Navy’s Ordnance Information System, the Marine Corps’ Ordnance Information System – Marine Corps, and the Army’s Logistics Modernization Program by reviewing the services’ responses to questionnaires on the internal controls they use to manage their systems. We applied Standards for Internal Control in the Federal Government as our criteria, and found that the data were sufficiently reliable for determining whether DOD adequately maintained information on the quantity, value, condition, and location of excess, obsolete, and unserviceable conventional ammunition in its accounts and for our reporting purposes. The questions we sent the services solicited information on the controls they had implemented in their ammunition information systems. 
The questions sought to determine whether there were controls that restricted access to the information system to prevent unauthorized access or inappropriate use and whether there were data quality controls that ensured completeness, accuracy, authorization, and validity of all transactions. We interviewed service officials in the Army, Navy, Air Force, and Marine Corps to learn how ammunition is managed once the decision is made to demilitarize and transfer it to the CAD stockpile. We also interviewed officials on the visibility, accessibility, accuracy, and usefulness of the data on the CAD stockpile and to determine whether they had identified problems regarding the reliability of the data. Lastly, we reviewed policies and legislation to determine what guidance was provided on communicating excess conventional ammunition to other government agencies, and we interviewed SMCA officials about the extent to which they communicate the availability of excess ammunition to other government agencies and the challenges involved with making conventional ammunition available to government entities outside of DOD. To examine challenges, if any, DOD has identified in managing the current and anticipated CAD stockpile, and if so, actions taken to address those challenges, we reviewed DOD reports on the management of the current CAD stockpile to identify any problem areas and DOD’s plans to address these problems. We visited McAlester Army Ammunition Plant and examined the management of its ammunition demilitarization operation, including storage practices and a variety of methods to destroy the ammunition. We selected McAlester Army Ammunition Plant to visit because a large portion of the CAD stockpile was stored there, and it used several methods to demilitarize ammunition. We also contacted the other six depots that store and demilitarize ammunition and requested the same information on the management of their respective ammunition demilitarization operations. We interviewed SMCA officials and officials in the Army, Navy, Air Force, and Marine Corps to identify challenges they face in managing the stockpile and discuss the actions they have taken to address the challenges. To describe DOD’s average costs of storing and maintaining items in the CAD stockpile and the average costs of the disposal of items in the stockpile, we obtained fiscal year 2015 cost estimates for storing and demilitarizing ammunition from the Army Materiel Command’s Joint Munitions Command, and interviewed officials about what factors were used to develop these cost estimates. We also reviewed a 2013 DOD report on the cost of demilitarizing conventional ammunition to determine the factors that drive demilitarization costs. Additionally, we interviewed Army officials on the process they use to make demilitarization decisions. To describe DOD’s policies and procedures governing the demilitarization of excess, obsolete, and unserviceable conventional ammunition and discuss the extent to which they are consistent with DOD guidance for developing policies and procedures, we obtained policies, procedures, and guidance on demilitarization. We determined that these policies, procedures, and guidance would be considered adequate if they conformed to DOD guidance on directives and instructions. 
Therefore, we compared the requirements in DOD Instruction 5025.01 with the guidance governing demilitarization of conventional ammunition and determined whether DOD followed this instruction on how guidance documents should be developed and how often they should be updated. To determine the extent to which DOD policies and procedures on demilitarization of conventional ammunition are effective, we interviewed officials in the Army and contacted the demilitarization depots to obtain their opinions on the effectiveness and usefulness of DOD policies and procedures governing the demilitarization of conventional ammunition. We visited or contacted the following offices during our review. Unless otherwise specified, these organizations are located in or near Washington, D.C.
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
U.S. Special Operations Command, Tampa, Florida
Defense Logistics Agency
Program Executive Office for Ammunition, Dover, New Jersey
Office of the Executive Director for Conventional Ammunition, Dover, New Jersey
Office of the Assistant Secretary of the Army (Acquisition, Logistics and Technology)
Headquarters, Department of the Army, Army Deputy Chief of Staff for Logistics (G-4)
U.S. Army Materiel Command, Huntsville, Alabama
U.S. Army Joint Munitions Command, Rock Island, Illinois
U.S. Army Aviation and Missile Command, Huntsville, Alabama
McAlester Army Ammunition Plant, McAlester, Oklahoma
Office of the Chief of Naval Operations, Director for Material Readiness & Logistics (N4)
Naval Supply Systems Command, Mechanicsburg, Pennsylvania
U.S. Marine Corps Headquarters
U.S. Marine Corps Systems Command, Quantico, Virginia
U.S. Air Force Headquarters
U.S. Air Force Life Cycle Management Center Readiness, Ogden, Utah
We conducted this performance audit from August 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Army has seven demilitarization locations that store 98 percent of the conventional ammunition awaiting demilitarization and disposal (CAD) stockpile. Figure 3 below shows these seven demilitarization locations, the amount of the CAD stockpile at those locations, and the demilitarization capabilities at each location. 1. Autoclave - Autoclave capability removes and reclaims main charge cast explosives (such as TNT) from projectiles and bombs. Munitions are prepared for the autoclave by disassembly or cutting to expose the main explosive charge. They are placed in the autoclave and the vessel is heated using steam. As the munitions body heats up, the explosive melts and flows to the bottom of the autoclave for collection in heated kettles. 2. Hot Water Washout - Washout capability removes and reclaims main cast explosive charges from projectiles, bombs, and mines. Munitions are prepared for washout by disassembly to expose the main explosive charge. Munitions are placed over a washout tank where low-pressure hot water is injected into the cavity to wash out the explosives into a recovery tank. 3. 
Cryofracture - Cryofracture involves the cooling of the munitions in a liquid nitrogen bath, followed by fracture of the embrittled item(s) in a hydraulic press and the subsequent thermal treatment of the fractured munitions debris in order to destroy the explosives and decontaminate any residual metal parts. 4. Hydrolysis – Hydrolysis uses a sodium hydroxide solution to dissolve the aluminum casing and expose the energetic materials contained within. The sodium hydroxide solution then reacts with the energetic materials, breaking them down and rendering them inert. 5. Improved Conventional Munitions Download – Joint Munitions Command officials describe this as a process developed to demilitarize artillery projectiles that contain submunitions, which are small bombs. The base plate of the projectile is removed to access the submunitions, and they are removed for disposition on the open detonation range. The metal parts, including the projectile body and base plate, are often reused in the manufacture of new rounds. 6. Incineration - Incineration provides an environmentally acceptable means to destroy munitions not suitable for other demilitarization methods and reclaim the metal scrap for sale. Small munitions and/or components are fed on conveyor(s) into the incinerator where they burn or detonate. Metal residues are discharged and collected for salvage. 7. INERT – According to Joint Munitions Command officials, INERT is the shredding, cutting, or mutilation of munitions items, components, or packaging that do not contain energetic materials. 8. Open Burn/Open Detonation - Open burn and open detonation are the intentional combustion or detonation of explosives or munitions, without control or containment of the reaction, and are the preferred method for cost-effective demilitarization of many items. Open burn and open detonation techniques were the primary means used to demilitarize munitions for several decades. 9. Slurry Emulsion Manufacturing Facility – According to Joint Munitions Command officials, this facility combines energetic material recovered from munitions items with other commercial ingredients to produce blasting charges the mining industry uses. 10. Steamout - Steamout is similar to hot water washout in that both processes essentially melt out energetic fillers in large-caliber projectiles, bombs, and other munitions. With the steamout process, items are placed on an inclined cradle, and steam is jetted in to melt out the fill. The molten slurry is collected and sent to corrugated cooling pans. The pans are held in a vented and heated hood until all the water has evaporated and the explosive solidifies. The solidified explosive is broken into chunks, boxed, and then, according to Joint Munitions Command officials, used as donor material for open detonation projects. 11. White Phosphorus Plant - The White Phosphorus-Phosphoric Acid Conversion Plant provides an environmentally acceptable means to demilitarize munitions containing white phosphorus by converting it into phosphoric acid. The munitions are punched to expose the white phosphorus and quickly burned. Smoke from the burning munitions is pulled through a closed-loop ducting system into the wet scrubber in the acid plant system for conversion to phosphoric acid. The phosphoric acid is collected and packaged for sale. Metal parts are discharged and collected for salvage. 
To determine what conventional ammunition should be demilitarized, Joint Munitions Command officials stated they use a database tool called the Demilitarization Optimizer. To develop an annual demilitarization plan, the optimizer produces an initial list of projects, measured in tons, that will result in demilitarizing the most ammunition possible based on the factors entered into it. Officials stated they use the optimizer as a starting point in developing the annual demilitarization plan; they make adjustments to the optimizer output to maintain demilitarization capability at the depots and to balance the workload over the years. For the demilitarization of missiles, Army Aviation and Missile Command officials stated they do not use the optimizer because they prepare their plan using the number of missiles; however, they consider many of the same factors in determining what missiles to demilitarize in a given year. The optimizer is a database tool used to determine the ammunition, with the exception of missiles, that will be demilitarized, given certain parameters (e.g., inventory, depot capability and capacity, funding, transportation costs, and any mandatory workload requirements). The optimizer has been used by Joint Munitions Command since 1999 as a tool to assist in demilitarization program planning, provide justification to answer questions received from Congress as well as Army headquarters, and provide the most economical allocation of resources among the government depots. Further, the optimizer provides Joint Munitions Command with an auditable trail of decision making and an ability to provide a quick response to “what if” questions. The optimizer database uses several data points to determine what items should be demilitarized: 1. Demilitarization inventory and forecasted additions to the CAD stockpile – the amount of ammunition currently in the CAD stockpile and the estimated amount of ammunition that the services determine they will add to the stockpile that year. 2. Depot capability, capacity, and costs of carrying out demilitarization – depot capability is the type of demilitarization work the depot has the ability to conduct. For example, most of the depots have the capability to conduct demilitarization through open burn and open detonation. Depot capacity is the amount of work that the depot has the ability to conduct by demilitarization capability. For example, Letterkenny Munitions Center has the capacity to demilitarize 3,500 tons of ammunition each year by open burn and 1,250 tons by open detonation. The cost of carrying out demilitarization is an estimate, prepared by the depot, of the cost of demilitarizing specific ammunition using a particular demilitarization capability. Data on storage costs are not entered into the optimizer for cost calculations. 3. Funding – the amount of funding available for demilitarization based on the current fiscal year budget allocation. 4. Packing, crating, handling, and transportation – the cost of moving ammunition to the appropriate demilitarization location. 5. Mandatory workloads – any directives or management initiatives that would change the priority of demilitarization work that must be conducted. For example, if the United States signed the Convention on Cluster Munitions, DOD would be required to demilitarize all cluster munitions within 8 years. This information would be entered into the optimizer to ensure the treaty requirement would be met. 
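The kind of selection the optimizer performs can be sketched in a few lines of code. The example below is a simplified, hypothetical greedy selection that maximizes tons demilitarized within an annual funding level and per-depot capacity while scheduling mandatory (for example, treaty-driven) workloads first; the project names, depots, tonnages, and dollar figures are invented for illustration, and this sketch is not the actual DOD database tool.

```python
# Simplified, hypothetical sketch of optimizer-style selection: maximize tons
# demilitarized subject to annual funding and per-depot capacity, scheduling
# mandatory workloads first. All project data below are invented.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    depot: str
    tons: float              # tons of ammunition the project would demilitarize
    cost: float              # estimated demil cost, including packing and transport
    mandatory: bool = False  # directive- or treaty-driven workload

def plan_year(projects, funding, depot_capacity_tons):
    """Greedy plan: mandatory projects first, then cheapest cost per ton."""
    selected, spent = [], 0.0
    used = {depot: 0.0 for depot in depot_capacity_tons}
    ordered = sorted(projects, key=lambda p: (not p.mandatory, p.cost / p.tons))
    for p in ordered:
        fits_budget = spent + p.cost <= funding
        fits_depot = used[p.depot] + p.tons <= depot_capacity_tons[p.depot]
        if fits_budget and fits_depot:
            selected.append(p)
            spent += p.cost
            used[p.depot] += p.tons
    return selected, spent

if __name__ == "__main__":
    projects = [
        Project("Projectile washout lot", "Depot A", tons=1_200, cost=900_000),
        Project("Cluster munition lot", "Depot B", tons=800, cost=2_400_000, mandatory=True),
        Project("Small arms open burn", "Depot A", tons=3_000, cost=1_500_000),
    ]
    plan, spent = plan_year(projects, funding=4_000_000,
                            depot_capacity_tons={"Depot A": 3_500, "Depot B": 1_250})
    print([p.name for p in plan], f"${spent:,.0f}")
```

As the discussion that follows notes, officials treat this type of output only as a starting point and adjust it by hand to balance expensive and inexpensive projects and to keep each depot's demilitarization capabilities exercised.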
Joint Munitions Command officials cautioned that there are some inherent uncertainties in the optimizer process that affect the outcome. One of the uncertainties is the incoming workload. While Joint Munitions Command has an estimate of how much inventory will be generated each year for demilitarization, the estimates are not perfect and leave uncertainty in the quantity of items that will be turned over for demilitarization and the time at which those items will enter the CAD stockpile. Joint Munitions Command officials stated that the optimizer provides a good starting point for decision-making, based on the specific parameters described above, but they do not assign demilitarization projects based solely on the optimizer output. Officials stated that the optimizer produces a list of projects, based on tons, that would be most economical to demilitarize for that given year. However, adjustments are made to balance complex, expensive demilitarization projects with simple, inexpensive demilitarization projects. Since the optimizer attempts to maximize the amount of conventional ammunition demilitarized, it tends to recommend a number of inexpensive projects. This results in pushing the expensive demilitarization projects into the future, which may increase future demilitarization costs. Therefore, to maintain a balance between future demilitarization funding needs and the current funding provided for demilitarization, officials replace some of the inexpensive projects the optimizer recommends with expensive projects. Additionally, officials make adjustments to the optimizer results to ensure each depot is provided sufficient work to maintain demilitarization capabilities. Officials are concerned that if they do not provide some work to each of the depots, the depots would lose their demilitarization capability because some processes require specialized skills or training and retaining those personnel would be impossible if demilitarization was curtailed for a significant amount of time. The loss of trained personnel would create a significant deficit in training and delay the restart of any future demilitarization operations. Further, officials are concerned they risk losing their environmental permits if demilitarization operations were stopped at an installation for a significant amount of time. For fiscal year 2015, the Joint Munitions Command and Program Executive Office for Ammunition officials stated they planned a demilitarization program of about $71 million, which would destroy about 67,640 tons of ammunition. Demilitarization officials at the Aviation and Missile Command stated they use similar factors in determining what missiles to demilitarize, including the location of the missiles, the capabilities and capacity of the depots, the estimated cost to demilitarize, and the funding available. Officials stated the Aviation and Missile Command does not use the optimizer tool, but instead the demilitarization officials coordinate with Product Manager Demilitarization to develop an annual missile demilitarization execution plan. In addition to the factors listed above, officials also consider the safety inspections that have been conducted on each missile and push any potentially unsafe-to-store items to the top of the demilitarization list. While Aviation and Missile Command demilitarization officials do not currently use an optimizer tool, they stated that they are considering whether an optimizer database would be feasible for use with missiles. 
For fiscal year 2015, the Aviation and Missile Command and Program Executive Office Ammunition officials stated they planned a demilitarization program of about $43 million, which would destroy about 141,598 missiles and components. In addition to the contact named above, Carleen Bennett (Assistant Director), George Bustamante, Lindsey Cross, Chaneé Gaskin, Kevin Keith, Carol Petersen, Michael Silver, Amie Steele, Alexander Welsh, Erik Wilkins-McKee, and Michael Willems made key contributions to this report.
DOD manages conventional ammunition that ranges from small arms cartridges to rockets, mortars, artillery shells, and tactical missiles. When a military service determines such ammunition is beyond its needs, obsolete, or unserviceable, it is offered to the other services and if not taken, transferred to the Army, which manages the CAD stockpile and takes actions to demilitarize and dispose of the ammunition in the stockpile. According to data provided by DOD officials, as of February 2015, the stockpile was about 529,373 tons. DOD estimates that from fiscal year 2016 to fiscal year 2020 it will add an additional 582,789 tons of conventional ammunition to this CAD stockpile. Section 352 of the National Defense Authorization Act for Fiscal Year 2015 included a provision that GAO review and report on the management of DOD's CAD stockpile. This report assesses, among other things, the extent to which DOD has adequately maintained and shared information on excess, obsolete, and unserviceable ammunition for the military services. GAO reviewed applicable guidance and the military service ammunition databases; visited an Army depot that conducts ammunition demilitarization; and interviewed appropriate DOD officials. The Department of Defense (DOD) maintains information on its excess, obsolete, and unserviceable conventional ammunition for the military services and shares this information on a limited basis with other government agencies, but its management of its conventional ammunition awaiting demilitarization and disposal (CAD) stockpile can be strengthened in two areas. The Army uses its Logistics Modernization Program database to maintain consolidated information on ammunition in the CAD stockpile, but GAO found that records for some items do not include complete data on weight. Specifically, of 36,355 records in the database, 2,829 did not have assigned weights as of February 2015. Internal control standards state that an entity should have controls to ensure that all transactions are complete and accurately recorded. DOD officials stated they are trying to correct current records with missing data; however, the number of records without weight data has increased. For example, as of February 2015, the number of records with missing data had increased by more than 600 since 2012. Since DOD uses weight in determining, among other things, cost estimates for demilitarization projects and what ammunition to demilitarize, missing weight data can negatively impact its efforts to destroy the most ammunition possible with the resources available. The military services have access to information on the CAD stockpile maintained in the Army's database and can search it for useable ammunition that could fill their requirements, but other government agencies do not and DOD does not have a systematic means for sharing such information. Federal internal control standards state that management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders. DOD officials told GAO that there have been instances of transfers of ammunition to other government agencies, but these have been done informally and on a limited basis. Without a systematic means for regularly sharing information on useable ammunition beyond DOD's needs, both DOD and other agencies may be missing opportunities to reduce costs related to demilitarization and ammunition procurement. 
GAO recommends DOD develop a plan to identify and record missing weight data and develop a systematic means to share information on the stockpile with other government agencies. DOD agreed with GAO's recommendations.
DOE is responsible for a nationwide complex of facilities created during World War II and the Cold War to research, produce, and test nuclear weapons. Much of the complex is no longer in productive use, but it contains vast quantities of radioactive waste related to the production of nuclear material, such as plutonium-contaminated sludge, and hazardous waste, such as solvents and hazardous chemicals. Since the 1980s, DOE has been planning and carrying out activities around the complex to clean up, contain, safely store, and dispose of these materials. It is a daunting challenge, involving the development of complicated technologies and costing about $220 billion over 70 years or more. DOE has reported completing its cleanup work at 74 of the 114 sites in the complex, but those were small and the least difficult to deal with. The sites remaining to be cleaned up present enormous challenges to DOE. DOE’s cleanup program is carried out primarily under two environmental laws. Under section 120 of CERCLA, EPA must, where appropriate, evaluate hazardous waste sites at DOE’s facilities to determine whether the waste sites qualify for inclusion on the National Priorities List, EPA’s list of the nation’s most serious hazardous waste sites. For each facility listed on the National Priorities List, section 120(e)(2) of CERCLA requires DOE to enter into an interagency agreement with EPA for the completion of all necessary remedial actions at the facility. These agreements often include the affected states as parties to the agreements. These agreements may be known as Federal Facility Agreements or Tri-Party Agreements. Under amendments to RCRA contained in section 105 of the Federal Facility Compliance Act of 1992, DOE generally must develop site treatment plans for its mixed-waste sites. These plans are submitted for approval to states authorized by EPA to perform regulatory responsibilities for RCRA within their borders or to EPA if the state does not have the required authority. Upon approval of the treatment plans, the state or EPA must issue an order requiring compliance with the approved plan. The agreements are generally known as Federal Facility Compliance orders. DOE carries out its cleanup program through the Assistant Secretary for Environmental Management and in consultation with a variety of stakeholders. These include the federal EPA and state environmental agencies, county and local governmental agencies, citizen groups, advisory groups, Native American tribes, and other organizations. In most cases, DOE’s regulators are parties to the compliance agreements. Other stakeholders advocate their views through various public involvement processes including site-specific advisory boards. Compliance agreements in effect at DOE sites can be grouped into three main types (see table 1). Agreements of the first type—those specifically required by CERCLA or by RCRA—are in effect at all of DOE’s major sites. They tend to cover a relatively large number of cleanup activities and have the majority of schedule milestones that DOE must meet. By contrast, agreements that implement court-ordered settlements exist at only a few DOE sites, tend to be focused on a specific issue or concern, and have fewer associated schedule milestones. These agreements are typically between DOE and states. 
The remaining agreements are based on either federal or state environmental laws and address a variety of purposes, such as cleaning up spills of hazardous waste or remediating groundwater contamination, and have a wide-ranging number of milestones. Most of the milestones DOE must meet are contained in the compliance agreements at its six largest sites—Hanford, Savannah River, Idaho Falls, Rocky Flats, Oak Ridge, and Fernald. These six DOE sites are important because they receive about two-thirds of DOE’s cleanup funding. In all, these sites account for 40 of the agreements and more than 4,200 milestones. DOE reported completing about two-thirds of the 7,186 milestones contained in its compliance agreements as of December 2001. Of the 4,558 milestones completed, about 80 percent were finished by the original due date for the milestone. The remainder of the completed milestones were finished either after the original due date had passed or on a renegotiated due date, but DOE reported that the regulators considered the milestones to be met. DOE’s six largest sites reported completing a total of 2,901 of their 4,262 milestones and met the original completion date for the milestones an average of 79 percent of the time. As table 2 shows, this percentage varied from a high of 95 percent at Rocky Flats to a low of 47 percent at Savannah River. Besides the 1,334 milestones yet to be completed, additional milestones will be added in the future. Although DOE has completed many of the milestones on time, for several reasons DOE’s success in completing milestones on time is not a good measure of progress in cleaning up the weapons complex. Specifically: Many of the milestones do not indicate what cleanup work has been accomplished. For example, many milestones require completing an administrative requirement that may not indicate what, if any, actual cleanup work was performed. At DOE’s six largest sites, DOE officials reported that about 73 percent of the 2,901 schedule milestones completed were tied to administrative requirements, such as obtaining a permit or submitting a report. Some agreements do not have a fixed number of milestones, and additional milestones are added over time as the scope of work is more fully defined. For example, one of Idaho Falls’ compliance agreements establishes milestones for remedial activities after a record of decision has been signed for a given work area. Four records of decision associated with the agreement have not yet been approved. Their approval will increase the number of enforceable milestones required under that agreement. Many of the remaining milestones are tied to DOE’s most expensive and challenging cleanup work, much of which still lies ahead. Approximately two-thirds of the estimated $220 billion cost of cleaning up DOE sites will be incurred after 2006. DOE has reported that the remaining cleanup activities present enormous technical and management challenges, and considerable uncertainties exist over the final cost and time frame for completing the cleanup. Even though schedule milestones are of questionable value as a measure of cleanup progress, the milestones do help regulators track DOE’s activities. Regulators at the four sites we visited said that the compliance agreements they oversee and the milestones associated with those agreements provide a way to bring DOE into compliance with existing environmental laws and regulations. 
They said the agreements also help to integrate the requirements under various federal laws and allow regulators to track annual progress against DOE’s milestone commitments. Regulators have generally been flexible in agreeing with DOE to change milestone dates when the original milestone could not be met. DOE received approval to change milestone deadlines in over 93 percent of the 1,413 requests made to regulators. Only 3 percent of DOE’s requests were denied. Regulators at the four sites we visited told us they prefer to be flexible with DOE on accomplishing an agreement’s cleanup goals. For example, they generally expressed willingness to work with DOE to extend milestone deadlines when a problem arises due to technology limitations or engineering problems. Because regulators have been so willing to adjust milestones, DOE officials reported missing a total of only 48 milestones, or about 1 percent of milestones that have been completed. Even in those few instances where DOE missed milestone deadlines and regulators were unwilling to negotiate revised dates, regulators have infrequently applied penalties available under the compliance agreements. DOE reported that regulators have taken enforcement actions for missed milestone deadlines only 13 times since 1988. These enforcement actions resulted in DOE paying about $1.8 million in monetary penalties, as shown in table 3. In addition to or instead of regulators assessing monetary penalties, several DOE sites agreed to other arrangements valued at about $4 million. For example, for missing a milestone to open a transuranic waste storage facility at the Rocky Flats site, the site agreed to provide a $40,000 grant to a local emergency planning committee to support a chemical-safety-in-schools program. At the Oak Ridge site, because of delays in operating a mixed waste incinerator, site officials agreed to move up the completion date for $1.4 million worth of cleanup work already scheduled. Also, at three sites—Paducah, Kentucky; Lawrence Livermore Main Site, California; and Nevada Test Site, Nevada—the regulators either did not impose penalties for missed milestones or the issue was still under discussion with DOE at the time of our review. The President’s budget submitted to the Congress does not provide information on the amount of funding requested for DOE’s compliance requirements. DOE sites prepare budget estimates that include compliance cost estimates and submit them for consideration by DOE headquarters. However, DOE headquarters officials evaluate individual site estimates and combine them into an overall DOE-wide budget, taking into account broader considerations and other priorities that the department must address as part of the give-and-take of the budget process. As a result, the final budget sent to the Congress has summary information on DOE’s programs and activities, but it provides no information on the portion of the budget needed to fund compliance requirements. DOE is not required to develop or present this information to the Congress. The President’s budget typically states that the DOE funding requested is sufficient to substantially comply with compliance agreements, but DOE does not develop or disclose the total amount of funding needed for compliance. 
Officials at DOE headquarters told us that budget guidance from the Office of Management and Budget does not require DOE to develop or present information on the cost of meeting compliance requirements, and they said doing so for the thousands of milestones DOE must meet would be unnecessarily burdensome. They said their approach has been to allocate funds appropriated by the Congress and make it the sites’ responsibility to use the funds in a way that meets the compliance agreement milestones established at the site level. Individual DOE sites develop information on the estimated cost of meeting compliance agreements, but the annual estimates are a flexible number. Sites develop these estimates because many of the compliance agreements require DOE to request sufficient funding each year to meet all of the requirements in the agreements. Also, DOE must respond to Executive Order 12088, which directs executive agencies to ensure that they request sufficient funds to comply with pollution control standards. Accordingly, each year DOE’s sites develop budget estimates that also identify the amount needed to meet compliance requirements. The sites’ process in developing these compliance estimates shows that a compliance estimate is a flexible number. For example, two budget estimates typically completed by the sites each year are the “full requirements” estimate and the “target” estimate. The full requirements estimate identifies how much money a site would need to accomplish its work in what site officials consider to be the most desirable fashion. The target estimate reflects a budget strategy based primarily on the amount of funding the site received the previous year and is considered a more realistic estimate of the funding a site can expect to receive. For each of these budget estimates, DOE sites also include an estimate of their compliance costs. As a result of this process, DOE sites usually have at least two different estimates of their compliance costs for the same budget year. Table 4 shows how the compliance cost estimates related to compliance agreements changed under different budget scenarios at four DOE sites. The multiple estimates of compliance costs developed by individual DOE sites indicate that DOE sites have alternative ways of achieving compliance in any given year. DOE site officials said that how much DOE plans to spend on compliance activities each year varies depending on the total amount of money available. Because many of the compliance milestones are due in the future, sites estimate how much compliance activity is needed each year to meet the future milestones. If sites anticipate that less money will be available, they must decide what compliance activities are critical for that year and defer work on some longer-term milestones to future years. On the other hand, if more money is available, sites have an opportunity to increase spending on compliance activities earlier than absolutely necessary. DOE’s compliance agreements focus on environmental issues at specific sites and do not include information on the risks being addressed. As a result, they do not provide a means of setting priorities for risks among sites or a basis for decision-making across all DOE sites. Risk is only one of several factors considered in setting the milestones in compliance agreements. 
Other factors include the preferences and concerns of local stakeholders, business and technical risk, the cost associated with maintaining old facilities, and the desire to achieve demonstrable progress on cleanup. The schedules for when and in what sequence to perform the cleanup work reflect local DOE and stakeholder views on these and other factors and may not reflect the level of risk. For example, regulators at DOE’s Savannah River site told us that they were primarily concerned that DOE maintain a certain level of effort and they expected DOE to schedule cleanup activities to most efficiently clean up the site. DOE developed a decision model to determine how to allocate its cleanup dollars at Savannah River to achieve this efficiency. A group of outside reviewers assessing the system at the request of site management concluded that the model was so strongly weighted toward efficiency that it was unlikely that serious risks to human health or the environment could alter the sequencing of work. DOE officials said they revised the model so that serious risks receive greater emphasis. In response to concerns expressed by the Congress and others about the effectiveness of the cleanup program, DOE has made several attempts to develop a national, risk-based approach to cleanup, but has not succeeded. For example, in 1999, DOE pilot-tested the use of site risk profiles at 10 DOE offices. The profiles were intended to provide risk information about the sites, make effective use of existing data at the sites, and incorporate stakeholder input. However, reviewers found that the site profiles failed to adequately address environmental or worker risks because the risks were not consistently or adequately documented. In 2001, DOE eliminated a support group responsible for assisting the sites with this effort, and the risk profiles are generally no longer being developed or used. A 1999 DOE-funded study to evaluate its efforts to establish greater use of risk-based decision-making concluded that none of the attempts had been successful. Common problems identified by the study included poor documentation of risks and inconsistent scoring of risks between sites. The study reported that factors contributing to the failure of these efforts included a lack of consistent vision about how to use risk to establish work priorities, the lack of confidence in the results by DOE personnel, the unacceptability of the approaches to stakeholders at the sites, and DOE’s overall failure to integrate any of the approaches into the decision-making process. However, the study concluded that the use of risk as a criterion for cleanup decision-making across DOE’s sites was not only essential but also feasible and practical, given an appropriate level of commitment and effort by DOE. DOE plans to shift its cleanup program to place greater focus on rapid reduction of environmental risk, signaling yet again the need for a national risk-based approach to cleanup. Without a national, risk-based approach to cleanup in place, DOE’s budget strategy had been to provide stable funding for individual sites and to allow the sites to determine what they needed most to accomplish. However, in a February 2002 report, DOE described numerous problems with the environmental management program and recommended a number of corrective actions. 
The report concluded that, among other things, the cleanup program was not based on a comprehensive, coherent, technically supported risk prioritization; it was not focused on accelerating risk reduction; and it was not addressing the challenges of uncontrolled cost and schedule growth. The report recommended that DOE, in consultation with its regulators, move to a national strategy for cleanup. In addition, the report noted that the compliance agreements have failed to achieve the expected risk reduction and have sometimes not focused on the highest risk. The report recommended that DOE develop specific proposals and present them to the states and EPA with accelerated risk reduction as the goal. DOE’s new initiative provides additional funds for cleanup reform and is designed to serve as an incentive to sites and regulators to identify accelerated risk reduction and cleanup approaches. DOE’s fiscal year 2003 budget request includes $800 million for this purpose. Moreover, the Administration has agreed to support up to an additional $300 million if needed for cleanup reforms. The set-aside would come from a reduction in individual site funding levels and an increase in the overall funding level for the cleanup program. The money would be made available to sites that reach agreements with federal and state regulators on accelerated cleanup approaches. Sites that do not develop accelerated programs would not be eligible for the additional funds. As a result, sites that do not participate could receive less funding than in past years. To date, at least five major DOE sites with compliance agreements have signed letters of intent with their regulators outlining an agreement in principle to accelerate cleanup—Hanford, Idaho, Los Alamos, Oak Ridge, and Nevada Test Site. However, the letters of intent generally also include a provision that the letters do not modify the obligations DOE agreed to in the underlying compliance agreements. At Hanford, DOE and the regulators signed a letter of intent in March 2002 to accelerate cleanup at the site by 35 years or more. DOE and the regulators agreed to consider the greatest risks first as a principle in setting cleanup priorities. They also agreed to consider, as targets of opportunity for accelerated risk reduction, 42 potential areas identified in a recent study at the site. While accelerating the cleanup may hold promise, Hanford officials acknowledged that many technical, regulatory, and operational decisions need to be made to actually implement the proposals in the new approach. DOE is proceeding with the selection and approval of accelerated programs at the sites, as well as identifying the funding for those accelerated programs. At the same time, DOE is considering how best to develop a risk-based cleanup strategy. DOE’s Assistant Secretary for Environmental Management said that in developing the risk-based approach, DOE should use available technical information, existing reports, DOE’s own knowledge, and common sense to make risk-based decisions. Because DOE’s approach to risk assessment is under development, it is unclear whether DOE will be able to overcome the barriers encountered during past efforts to formalize a risk-assessment process. In the interim, DOE headquarters review teams were evaluating the activities at each site and were qualitatively incorporating risk into those evaluations.
Compliance agreements have not been a barrier to previous DOE management improvements, but it is not clear if the agreements will be used to oppose proposed changes stemming from the February 2002 initiative. DOE has implemented or tried to implement a number of management initiatives in recent years to improve its performance and address uncontrolled cost and schedule growth. For example, in 1994, it launched its contract reform initiative; in 1995, it established its privatization initiative; and in 1998, it implemented its accelerated path-to-closure initiative. These initiatives affected how DOE approached the cleanup work, the relationship DOE had with its contractors, and, in some cases, the schedule for completing the work. Based on our review of past evaluations of these initiatives and discussions with DOE officials and regulators at DOE sites, it appears that DOE proceeded with these initiatives without significant resistance or constraints as a result of the compliance agreements. Because DOE’s cleanup reform initiative is in its early stages, and site-specific strategies are only beginning to emerge, it is unclear how the site compliance agreements will affect implementation of DOE’s latest cleanup reforms. For example, it is not yet known how many sites will participate in DOE’s initiative and how many other sites will encounter cleanup delays because of reduced funding. However, early indications suggest caution. Parties to the agreements at the sites we visited were supportive of DOE’s overall efforts to improve management of the cleanup program, but expressed some concerns about proposals stemming from the February 2002 review of the program. They said that they welcome DOE’s efforts to accelerate cleanup and focus attention on the more serious environmental risks because such initiatives are consistent with the regulators’ overall goals of reducing risks to human health and the environment. Most regulators added, however, that DOE generally had not consulted with them in developing its reform initiative and they were concerned about being excluded from the process. Furthermore, they said DOE’s initiative lacked specific details and they had numerous questions about the criteria DOE will use to select sites and the process it will follow at those sites to develop an implementation plan to accelerate cleanup and modify cleanup approaches. Most regulators said they would not view as favorable any attempt by DOE to avoid appropriate waste treatment activities or significantly delay treatment by reducing funding available to sites. In such a case, these regulators are likely to oppose DOE’s initiative. They told us that they most likely would not be willing to renegotiate milestones in the compliance agreements if doing so would lead to delays in the cleanup program at their sites. In addition, these regulators said that if DOE misses the milestones after reducing the funding at individual sites, they would enforce the penalty provisions in the compliance agreements. The effect of compliance agreements on other aspects of DOE’s initiative, especially its proposal to reclassify waste into different risk categories to increase disposal options, is also unclear. Some of the proposed changes in waste treatment would signal major changes in DOE assumptions about acceptable waste treatment and disposal options. For example, one change would eliminate the need to vitrify at least 75 percent of the high-level waste, which could result in disposing of more of the waste at DOE sites.
In addition, DOE is considering the possibility of reclassifying much of its high-level waste as low-level mixed waste or transuranic waste based on the risk attributable to its actual composition. However, at all four sites we visited, regulators said that it is unclear how DOE’s proposed initiatives will be implemented, what technologies will be considered, and whether the changes will result in reduced cost and accelerated cleanup while adequately protecting human health and the environment. DOE generally did not seek input from site regulators or other stakeholders when developing its latest initiative. DOE’s review team leader said that when the review team visited individual sites, the team had not formulated its conclusions or recommendations and so did not seek regulators’ views. Furthermore, the team leader said that, during the review, DOE was holding internal discussions about improving ineffective cleanup processes, such as contracting procedures. To include regulators on the review team during these discussions, according to the team leader, could have created the impression that the criticism of DOE processes came from the regulators rather than from DOE and contractor staff. According to the Associate Deputy Assistant Secretary for Planning and Budget, since the review team’s proposals were made public in February, DOE has held discussions with regulators at all sites and headquarters about implementing the proposals. In summary, Mr. Chairman, DOE faces two main challenges in going forward with its initiative. The first is following through on its plan to develop and implement a risk-based method to prioritize its various cleanup activities. Given past failed attempts to implement a risk-based approach to cleanup, management leadership and resolve will be needed to overcome the barriers encountered in past attempts. The second challenge for DOE is following through on its plan to involve regulators in site implementation plans. DOE generally did not involve states and regulatory agencies in the development of its management initiative. Regulators have expressed concerns about the lack of specifics in the initiative, how implementation plans will be developed at individual sites, and about proposals that may delay or significantly alter cleanup strategies. Addressing both of these challenges will be important to better ensure that DOE’s latest management initiative will achieve the desired results of accelerating risk reduction and reducing cleanup costs. Thank you, Mr. Chairman and Members of the Subcommittee. This concludes my testimony. I will be happy to respond to any questions that you may have. For future contacts regarding this testimony, please contact (Ms.) Gary Jones at (202) 512-3841. Chris Abraham, Doreen Feldman, Rich Johnson, Nancy Kintner-Meyer, Tom Perry, Ilene Pollack, Stan Stenersen, and Bill Swick made key contributions to this report.
Compliance agreements between the Department of Energy (DOE) and its regulators specify cleanup activities and milestones that DOE has agreed to achieve. The 70 compliance agreements at DOE sites vary, but can be divided into three main types. These are: (1) agreements specifically required by the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 to address cleanup of federal sites on EPA's national priorities list or by the Resource Conservation and Recovery Act of 1976 to address the management of mixed radioactive and hazardous waste at DOE facilities; (2) court-ordered agreements resulting from lawsuits initiated primarily by states; and (3) other agreements, including state administrative orders enforcing state hazardous waste management laws. DOE reported completing about 80 percent of its milestones by the time originally scheduled in the agreements. The cost of complying with these agreements is not specifically identified in the DOE budget submitted to Congress. Individual DOE sites develop annual compliance cost estimates as part of their budget requests. However, DOE headquarters officials adjust those individual site estimates to reflect national priorities and to reconcile various competing demands. Compliance agreements are site-specific and are not intended to provide a mechanism for DOE to use in prioritizing risks among the various sites. Compliance agreements have not been a barrier to previous DOE management initiatives, but it is not clear if the compliance agreements will be used to oppose DOE's latest initiative to focus on accelerating risk reduction at sites.
An unregulated child custody transfer, commonly referred to as rehoming, is not an adoption. It is a practice in which parents seek new homes for their children and place them without the safeguards and oversight of the courts or the child welfare system. This practice does not pertain exclusively to adopted children; biological children may also be subject to unregulated transfers. However, media reports and child welfare and adoption organizations have focused on unregulated transfers of adopted children that involve families who may be unable or unwilling to deal with the emotional and behavioral challenges that may be caused by a child’s pre-adoption conditions. For example, some adopted children may have histories of long-term institutionalization (e.g., orphanages), abuse, or other traumatic experiences that affect their behavior. An adoption may be terminated as a result of a disruption, which occurs before the adoption is finalized, or a dissolution, which occurs after the adoption has been finalized, generally in a legal proceeding. Under these circumstances, the child would go into the child welfare system or be legally adopted by another family. In contrast, unregulated transfers occur when parents intend to permanently transfer custody of their child to a new family without following these steps. Sometimes the parents will use a document called a power of attorney to delegate to the new family certain authority for the care and control of the child, although such documents do not terminate the legal relationship between the adoptive parents and the child. Because power of attorney arrangements are generally not overseen by any state or federal agency, information on the whereabouts of a child subject to an unregulated transfer using a power of attorney can be limited or unknown. In addition, because families who engage in an unregulated transfer do not follow the steps required for a legally recognized adoption, there may be no checks to ensure that the new home is an appropriate place for the child. There are different ways that a child can be adopted in the United States. International adoptions involve a child who was born in another country. Domestic adoptions can be adoptions from foster care, which involve children in the child welfare system whose biological parents have had their parental rights terminated. Other domestic adoptions include those conducted through private adoption agencies, attorneys, and others. Most domestic adoptions handled through private adoption agencies, attorneys, and others primarily involve infants or adoptions by a stepparent. Unregulated transfers do not follow the adoption process, which generally involves many steps to help ensure that the child is legally adopted and placed in an appropriate and permanent home. 
While the adoption process can be different depending on the state and type of adoption, it typically consists of: a home study performed by a licensed professional to assess the suitability of the prospective parents, such as their health, finances, and criminal history; an immigration application and petition, in the case of an international adoption; pre-adoption training for prospective parents, either online or in-person, for a specified number of hours on topics such as the adoption process and issues related to attachment and bonding; final approval of the adoption by a court, either in the United States or the child’s country of origin; and post-placement or post-adoption services, in some cases, which can range from information and referral services and peer support groups to more intensive services for children with severe behavioral needs. For example, these intensive services can include mental health counseling, respite care programs to provide temporary relief for caregivers by placing children in short-term accommodations outside the home, and residential treatment, which involves extended treatment services to children while they reside outside the home. Multiple federal, state, and other agencies can be involved in different stages of the adoption process, depending on the type of adoption. Fees also vary by type of adoption; while foster care adoptions may not have any fees, international adoptions can involve substantial financial investments from families. International adoptions. As required under federal law and State Department regulations, international adoptions are generally conducted through accredited adoption agencies or approved persons. USCIS is involved in adjudicating immigration petitions for these children as well as setting federal home study requirements for international adoptions and determining the suitability and eligibility of prospective adoptive parents. The State Department also sets requirements for pre-adoption training that international adoption agencies and approved persons must provide for prospective parents. There are no federal requirements for post-adoption monitoring for international adoptions, according to State Department officials. However, officials said some countries of origin require adoptive families to provide periodic reports (e.g., according to the State Department’s website, one country requires families to provide reports every 6 months for 2 years following an international adoption). Individual states may also have separate licensing requirements for international adoption agencies operating in their state. Foster care adoptions. Foster care adoptions are typically conducted by state, county, and local child welfare agencies or private adoption agencies with which they contract. For these adoptions, states set requirements for home studies, pre-adoption training, and post-adoption services. Private domestic adoptions. States also set requirements for home studies, pre-adoption training, and post-adoption services for private domestic adoptions, generally through state licensing standards and other requirements for private adoption agencies, attorneys, and others. Some federal funding is available for adoption services, in addition to any funding from state, local, or other sources.
Funding appropriated for Title IV-E of the Social Security Act makes up the large majority of federal funding dedicated to child welfare, comprising about 89 percent of federal child welfare appropriations (approximately $7.4 billion of nearly $8.3 billion) in fiscal year 2015, according to the Congressional Research Service. While the majority of these Title IV-E funds support children in the foster care system, the Title IV-E Adoption Assistance program provides grants to states for a portion of their costs to support families who adopted children with special needs, generally from foster care. For example, states provide ongoing monthly Adoption Assistance payments (subsidies) to eligible families that can be used to help pay for the costs of care for the child, which might include therapy and other post-adoption services. Funds appropriated for this program totaled about $2.5 billion in fiscal year 2015, comprising about 34 percent of Title IV-E program funding. In addition, Title IV-B of the Social Security Act, which is the primary source of federal child welfare funding available for child welfare services, also provides funds that states can use to support adoptions by any family. For example, states may use funds to support pre- and post-adoption services, although funds can also be used for a variety of other purposes to keep children safe and in stable families. Federal appropriations for Title IV-B comprised about 8 percent of dedicated federal child welfare appropriations (approximately $664 million of nearly $8.3 billion) in fiscal year 2015. Table 1 provides a summary of federal child welfare funding that states can use for adoption services, including programs under Title IV-E and IV-B of the Social Security Act. In addition to these programs, states may use savings generated from changes made to the eligibility criteria for the Title IV-E Adoption Assistance program for adoption services. These changes made additional children eligible for federal Title IV-E Adoption Assistance payments, thereby potentially freeing up state funds previously used for this purpose. The Preventing Sex Trafficking and Strengthening Families Act requires states to use 30 percent of these savings for post-adoption and related services. In addition, states may use different combinations of federal funds not specifically dedicated to child welfare to support adoption services, such as funds available under the Temporary Assistance for Needy Families block grant, Medicaid, and Social Services Block Grants. While states can use federal funds to support adoption services for families, we reported in January 2013 that federal funding for services designed to prevent children from entering foster care—such as adoption support services—can be limited. HHS does not collect information on how much states spend in federal funds specifically for post-adoption services. In addition, our prior work has shown that some states may not have information on the extent to which they use these federal funds for adoption services. Although states are to use savings generated from changes to the Title IV-E Adoption Assistance program for child welfare services, we reported in May 2014 that only 21 states reported calculating these savings for fiscal year 2012, and 20 states reported difficulties performing the calculations. In 2014, the Donaldson Adoption Institute attempted to collect information on states’ annual post-adoption service budgets, excluding Title IV-E Adoption Assistance program subsidies.
However, it reported that some states were unable to distinguish this budget item, especially when the primary programs that served adoptive families also served other families. It also reported that states with county-administered child welfare programs were unable to report total state budgets for post-adoption services. The Institute reported that annual budgets for these services ranged from $85,000 to $11.2 million in the 21 states that provided responses to the survey it conducted. International adoptions in the United States have changed over time from a system that predominantly involved the adoption of infants and toddlers to one that has involved an increasing proportion of older children and those with special needs. According to State Department data, less than 8 percent of children adopted internationally in fiscal year 2013 were younger than 1 year compared to over 40 percent in fiscal year 2004. In addition, one study reported in 2013 that nearly half of more than 1,000 parents surveyed who adopted internationally said their children had diagnosed special needs. The State Department, HHS, and others have reported that the changing landscape of international adoptions is attributable to many different factors, including positive cultural factors and socio-economic conditions in other countries that have made it easier for biological families to take care of their children or to adopt domestically—decisions that have impacted the number of children eligible for adoption by U.S. families. About 7,000 children were adopted internationally in fiscal year 2013 compared to nearly 23,000 in fiscal year 2004 (see fig. 1). Children in foster care may also be more likely to have special needs than children in the general population. According to a national survey conducted in 2008 and 2009, more than 42 percent of children ages 18 months to 17 years who were placed in a foster family home following an investigation of abuse and neglect were found to be at risk for an emotional or behavioral problem and potentially in need of mental health services. Multiple studies have shown that abuse and other maltreatment can cause changes in the brain development of children, and these changes may leave them more vulnerable to depression, post-traumatic stress disorder, and other behavioral or mental health issues. Studies show that children who are institutionalized—for example, in orphanages prior to being adopted by a family—are often subject to deprivation and neglect. Young children with a history of institutional care often show poor attention, hyperactivity, difficulty with emotion regulation, elevated levels of anxiety, and increased rates of attachment disorders. For example, they may develop Reactive Attachment Disorder, which is characterized by serious problems in emotional attachments to others. The physical, emotional, and social problems associated with this disorder may persist as the child grows older. Families who adopt children with severe behavioral or mental health issues may face situations which can put the family in crisis. For example, the adopted child may be violent toward siblings or parents. One study reported in 2014 that in 23 percent of cases where adoptions were dissolved, the adopted child was a threat to the safety of other children in the home. 
Families may choose an unregulated child custody transfer because they were not sufficiently prepared for the challenges they experienced in their adoption, according to many child welfare and adoption stakeholders we interviewed. This lack of preparation may include inadequate information about the child’s health, an insufficient home study to make a good match, and minimal pre-adoption training for parents. Many stakeholders we interviewed—including officials from selected states, child welfare and adoption organizations, and adoption agencies—expressed concern about the adequacy of the information provided to prospective parents on the behavioral and mental health conditions of a child adopted internationally. Access to accurate information is critical to ensuring that a family is aware of the type of ongoing support they may need for the child. However, officials from 11 of 19 child welfare and adoption organizations and 5 of 15 adoption agencies said families who adopt internationally often do not receive complete information on a child’s medical and behavioral needs before adopting. State Department officials explained that some low-income countries lack sufficient mental health care providers, making it difficult for international adoption agencies to ensure that children are accurately evaluated prior to adoption. USCIS officials also said some countries, for privacy reasons, do not allow prospective adoptive parents to review medical history documents until after an adoption is finalized. Many stakeholders also expressed concern that families may not have undergone an adequate home study to ensure they are a good match for their adopted child, and several noted that the home study is a critical point in the pre-adoption process, when social workers or adoption agency staff try to determine how families will handle challenges when parenting their adopted child. According to HHS officials, requirements for what should be assessed during a home study are determined by individual states for foster care adoptions. Home study requirements are determined by USCIS and the State Department for international adoptions. However, officials from 4 of 7 selected states and 8 of the 15 adoption agencies we interviewed expressed concerns about inconsistencies in the quality of home studies conducted by child welfare and adoption agencies across states. For example, Ohio officials said all child welfare and adoption agencies in their state are required to use a detailed home study format. They said they may not accept home studies conducted in other states that have less stringent requirements unless additional supporting documentation is provided, such as a background check and safety check of the home. Families also may not have received sufficient or targeted pre-adoption training to ensure they were prepared for their child’s specific needs, particularly for international adoptions, according to most stakeholders we interviewed. For foster care adoptions, states each set their own training requirements for prospective parents, according to HHS officials. About half of all states require agencies facilitating these adoptions to provide prospective parents with at least 27 hours of training, according to data obtained from HHS officials in May 2015. Our seven selected states have requirements of 18 to 47 hours of training for foster care adoptions, with some required in-person training in each state, according to state officials.
Many of our selected states also use similar training models for foster care adoptions, including Parent Resources for Information, Development, and Education (PRIDE) and Model Approach to Partnerships in Parenting (MAPP), which were developed by various child welfare organizations. In contrast, State Department regulations require 10 hours of training for international adoptions, all of which can be online. This training must cover topics defined by the federal regulations. Officials we interviewed from 5 of our selected states, 12 child welfare and adoption organizations, and 11 adoption agencies told us that this training may be insufficient, particularly since an increasing proportion of children adopted internationally are older and have special needs due to an extensive history of institutionalization and trauma. State Department officials told us they are considering revisions to pre-adoption training requirements for international adoptions, which we discuss later in the report. States may set training requirements for international adoptions above the 10-hour minimum or may have required training topics. Two of our seven selected states require more than 10 hours of training, according to state officials. For example, Wisconsin officials told us the state requires 18 hours of training, and the same topics are required for international and foster care adoptions. This training covers issues such as attachment in adoptive placement, the effects of abuse and neglect, and cultural sensitivity. In addition, this training includes opportunities to cover issues specific to the individual child (see table 2). State Department officials said international adoption agencies may also have their own training requirements beyond those of federal and state agencies. For example, officials from one international adoption agency said they require 30 hours of training for parents wishing to adopt abroad. This includes training on grief and loss, the child’s country of origin and cultural differences, the impact of institutionalization, and potential challenges and service needs. These officials said this expanded training is more costly for both the agency and prospective parents, and that some prospective parents thought the training was too cumbersome or expensive. Officials in most of the selected states, child welfare and adoption organizations, and adoption agencies we interviewed expressed concern that families may choose an unregulated transfer when they cannot access post-adoption services to help them cope with or avoid reaching a crisis point in their adoption. Several of these stakeholders explained that an adopted child may deal with continuing issues of attachment, identity, and loss of previous caregivers or biological parents. While services to help adoptive families can include information, referrals, and peer support groups, families who adopted children with severe behavioral needs may need more intensive services, such as mental health counseling, respite care, and residential treatment. Many stakeholders we interviewed suggested that families considering unregulated transfers may particularly need these intensive services. All seven of our selected states provide some kind of post-adoption services for families who adopted from foster care and internationally. 
For example, Wisconsin officials said the state provides parent training, a 24-hour information hotline, referral services, and mechanisms to link families to support groups and mentors, which are available to all adoptive families. Other types of services these selected states provide include lending libraries, newsletters, and brochures for parents. However, the seven selected states offered limited intensive services, particularly for international adoptions, according to our analysis of the information gathered from selected state officials. Officials from three states said their state offers counseling and other intensive services, such as case management and crisis intervention, to both families who adopted from foster care and internationally. However, officials from the six states that offer respite care and the four states that provide residential treatment told us their states provide these services exclusively to families who adopted from foster care. Some of these services have maximum time limits or are offered on a case-by-case basis. For example, Louisiana officials said their state offers respite care for up to 1 month, and Florida and Illinois officials said their states offer residential treatment services to families who adopted from foster care on a case-by-case basis. In addition, our seven selected states provide varying levels of financial support to eligible adoptive families through subsidies and cash assistance programs, according to the information gathered from selected state officials. For example, Ohio officials described a state program that used Title IV-B and state revenue funds to provide up to $10,000 per child per year in 2014 to pay service providers, with an additional $5,000 available per year if the child is recommended for residential treatment by a mental health provider. In addition, all of our selected states received federal funds under the Title IV-E Adoption Assistance program to provide subsidies to eligible adoptive families; the maximum subsidy amounts ranged from $400 to $2,700 per month in 2014. However, these subsidies are generally available only to eligible families who adopted children with special needs from foster care, and information is limited on how much families use their subsidies for services, such as counseling, versus other expenses for their adopted child, such as food, clothing, and day care. The Donaldson Adoption Institute reported in April 2014 on a variety of post-adoption services provided by 49 states that responded to survey questions about such services. It found that about one-third of these states offered almost no post-adoption services other than a subsidy for adoptive families. In addition, the report found that the majority of these states had services that were open exclusively to families who adopted from foster care. Officials in four of our seven selected states told us that the need for post-adoption services exceeded the funding available from state and federal programs. Our prior work has shown that child welfare agencies have struggled to meet the service needs of families. Our 2013 report found that local child welfare officials in four states we reviewed reported service gaps in multiple areas, including counseling and mental health services.
We also reported that state and local child welfare agencies may face difficult decisions when determining which activities—aimed at preserving families and preventing a child from entering foster care—to prioritize and fund, particularly in light of the ongoing fiscal challenges these agencies face. Similar to our selected states, officials from 12 of the 15 adoption agencies we interviewed said they provide some level of post-adoption services to families, such as information and referrals. Officials in 4 of the 15 adoption agencies said they provide intensive services, ranging from trauma-focused therapy to a weekend respite care program. Officials from six adoption agencies noted that resource constraints have affected their ability to provide post-adoption services. Officials from the Council on Accreditation—the organization responsible for accrediting agencies for international adoptions—said some international adoption agencies have struggled to maintain their businesses due to the decrease in the number of international adoptions overall (a decrease of 70 percent between fiscal years 2003 and 2014). They said while some larger agencies have been better able to provide services because they are financially stable, this can be a challenge for other agencies. Another limitation to accessing post-adoption services that many stakeholders expressed concern about was the cost of intensive services, which can be expensive for all families. Officials in 3 of 7 selected states, 6 of 19 child welfare and adoption organizations, and 5 of the 15 adoption agencies we interviewed said services can be expensive, particularly intensive services such as mental health counseling and residential treatment. We have previously reported that the cost to support a youth in a residential setting can amount to thousands of dollars per month. In addition to cost, adoptive families may have challenges finding mental health providers that are “adoption competent”—that is, knowledgeable about adoption-related issues, according to officials from five selected states, seven child welfare and adoption organizations, and eight adoption agencies. These stakeholders said mental health providers who do not understand issues unique to adoptive families will likely be less effective in helping these families work through issues. For example, one official told us adoptive families need therapists who can distinguish between normal adolescent behavior and a child acting out due to grief and loss resulting from his or her adoption. Several stakeholders also noted that families in rural areas may have even more difficulty accessing effective mental health providers. We reported in 2013 that a Florida behavioral health service provider had been advertising a child psychiatrist position for 5 years without success. In a 2011 report, we found that child psychiatrists and psychologists were among the most difficult specialist referrals to obtain for children in low-income families covered by Medicaid and the Children’s Health Insurance Program, both of which can cover children adopted from foster care and internationally. Lastly, families may not know about available services from their child welfare or adoption agency, and therefore do not seek help when needed, according to officials from four selected states and five adoption agencies. For example, Virginia officials said families that did not adopt from foster care may not know about support services they can access through their local child welfare agency. 
Wisconsin officials also said they struggle to find sufficient resources to conduct outreach to all adoptive parents about available state services. Officials from two selected states also raised concerns that families may not remember whether their adoption agency provides post-adoption services. They explained that some families may not need services for years after an adoption is final because issues may not arise until the child reaches adolescence. By that point, families may no longer have contact with their adoption agency. Families in need of help may be reluctant to ask child welfare agencies for assistance, according to officials from three child welfare and adoption organizations and four adoption agencies. For example, these officials noted that there is a stigma associated with contacting child welfare agencies since those agencies are also generally responsible for investigating cases of child abuse. A few of these officials further noted that families, including those who adopted from foster care and internationally, may fear that contacting an agency will prompt an investigation into how they care for all of their children. They also said families may be afraid that they will not be able to adopt again if they are involved with a child welfare agency. Officials in five of our seven selected states acknowledged the dilemma that families face if they contact child welfare agencies for services. In addition, officials in one selected state said parents cannot voluntarily relinquish custody of a child in their state (e.g., for care or services) without being charged with child abandonment. Officials in all seven selected states said families who decide to relinquish custody to the state may be required to pay ongoing child support. Similarly, families who adopted internationally may also be hesitant to reach out to their adoption agency. Representatives from 9 of the 15 adoption agencies we interviewed told us that families may be ashamed or embarrassed to contact the agency to discuss problems. Representatives from one adoption agency explained that families have gone through a rigorous home study process to prove that they will provide a good home to an adopted child. Thus, they said these families may be reluctant to contact their agency and admit that they are facing challenges in their adoptions. Because unregulated child custody transfers are an underground practice that happens outside the purview of the courts and the child welfare system, they are difficult to track, and no federal agency keeps statistics on their occurrence. These transfers may involve an exchange of a power of attorney that may not be filed with or approved by a court of law, although it may be signed by both parties and notarized. State laws vary, but generally a parent may use a power of attorney to temporarily grant another person certain powers regarding their child’s care and physical custody, such as the authority to make medical and educational decisions. For example, a military service member may sign a power of attorney to allow a family member or friend to take care of and make medical decisions for his or her child while he or she is deployed. However, because a power of attorney does not terminate the legal parent-child relationship, the adoptive parent still retains certain rights and responsibilities. For example, according to HHS, delegating responsibility for a child through a power of attorney does not insulate adoptive parents from state laws regarding imminent risk of serious harm.
State laws determine any time limits (e.g., 1 year) for grants of power of attorney, and also establish the procedures required to make such an arrangement effective. For example, officials in three of our seven selected states told us their state laws do not require power of attorney documents to be approved by a court, and officials in one selected state said their laws require court approval in certain circumstances. However, officials in three of these selected states said they were not aware of any mechanisms in their states to track expired power of attorney documents to determine if families are attempting to use them to permanently transfer custody. Unregulated transfers are also difficult to track because many adoptions are not monitored after the adoption is finalized. For those international adoptions subject to reporting requirements set by individual countries, reporting may occur for a limited time. For example, according to the State Department website, one country requires adoptive parents to provide information about the adoption at certain time intervals for the first 2 years. Officials from the State Department and several adoption agencies we interviewed told us that while parents may sign a contract when they adopt a child saying they will report the required information to the adoption agency, parents may not comply with post-adoption reporting requirements, and agencies have little leverage to enforce compliance. In addition, officials in our seven selected states said their state does not specifically monitor whether adopted children remain with their families after the adoption is finalized. Our observations of forums on social media websites indicate that some parents have been using these venues to seek new homes for their children. We observed posts in five social media forums and found a total of 23 posts in which a person wrote that they were seeking a new family for their child. Among the 9 posts that included information on a child’s age, those ages ranged from 7 to 16. Generally, parents in these forums who said they wanted to transfer a child indicated that they were in distress or crisis, and most often said they were seeking a new home because of the child’s behavioral issues or severe mental illness. These children included those who were adopted from foster care and internationally. For example, one post asked for a new home for a 7-year-old boy who had been diagnosed with numerous mental illnesses, including Reactive Attachment Disorder, Oppositional Defiant Disorder, and autism, and who was physically abusive to his siblings and family pets. Several posters responded with information about their family and location or said that they had sent the poster a private message. Another poster wrote that her son, whom she adopted internationally, had been diagnosed with multiple mental illnesses and was currently hospitalized for psychiatric reasons, and she was seeking a new home for him. In addition, we found 40 cases in which a person posted that they wanted to adopt a child. In some cases, posters wrote that they had successfully completed a home study. In other cases it was not clear whether they had undergone a home study. For example, only a third of the posts we observed in one online forum referenced a home study—either that the person seeking to adopt had completed one or the person seeking a new home for the child required one.
Some posters said they already had adopted children in the home, and some wrote they had adopted a previously adopted child, although it was unclear whether they had legally adopted the child or whether the child was transferred without court oversight. It is possible that conversations on the specifics of transferring a child were held either through private messages within the social media platform or by another means, such as email or phone. Because we did not investigate these posts further and because discussions between online participants can be continued privately, we were unable to determine whether a child was actually transferred to another family. Similarly, we were unable to determine, if such a transfer occurred, whether it was done through official means or an unregulated transfer. We identified 15 states in which laws were enacted, proposed legislation was introduced, or recent changes had been made to child welfare programs that were intended to safeguard children who may be subject to unregulated transfers. These included the seven states we selected for interviews as well as eight states recommended by representatives from child welfare and adoption organizations because of legislative activity initiated in these states during the course of our review. Of these 15 states, 7 enacted legislation and 3 made changes to child welfare programs. In addition, legislators in 10 of the 15 states introduced proposed legislation that had not been enacted as of July 2015 (see table 3). These selected laws, proposed legislation, and other actions within the 15 states reflect a variety of approaches to addressing unregulated transfers. The most common approaches were to criminalize unregulated transfers or actions that may lead to these transfers, and to restrict the advertising of children or potential homes for placement. Other approaches may deter unregulated transfers by requiring that parents or certain other individuals report cases in which custody of a child may have been transferred. Some approaches may help prevent transfers from occurring. These include revising requirements for preparing prospective parents for adoption and increasing outreach about services available to families after adopting (see table 4). The five states that enacted laws to criminalize unregulated transfers or actions that could lead to these transfers made the following changes: Arkansas and Louisiana enacted laws that define the practice of “re-homing” and impose criminal penalties for those engaging in it. The laws provide that those who commit the offense of re-homing, which each state defines differently but generally includes transferring physical custody of a child to a non-relative without court approval with the intent of avoiding permanent parental responsibility (or assisting in such a transfer), will be subject to a fine of up to $5,000 and imprisonment for up to 5 years. Similarly, Florida enacted a law establishing the crime of “unlawful desertion of a child,” which provides that a caregiver who deserts a child (leaves the child with a non-relative with the intent to not return and provide for the child’s care) under circumstances in which the caregiver knew or should have known that the child would be exposed to unreasonable risk of harm commits a third degree felony.
Maine also enacted a similar law, modifying its definition of “abandonment of a child.” This law provides that a person is guilty of child abandonment if they transfer physical custody of a child to a non-relative without court approval with the intent to avoid or divest themselves of permanent parental responsibility. The law specifies that violation of this provision constitutes different classes of crimes, depending on the age of the child. Wisconsin enacted a law that placed parameters on parental delegations made through a power of attorney, and established criminal penalties for unauthorized transfers of children across state lines. This law provides that delegations to a non-relative of a child’s care and custody under a power of attorney may be effective for no longer than 1 year unless approved by a juvenile court, and those who violate this provision are subject to a fine of up to $10,000 and/or imprisonment for up to 9 months. In addition, the law states that any person who sends a child out of the state, brings a child into the state, or causes such actions to occur for the purpose of permanently transferring physical custody of the child to a non-relative is guilty of a misdemeanor. Six states enacted laws to restrict the advertising of children or potential homes for adoption or other permanent placement. Specifically, Arkansas, Colorado, Florida, Louisiana, Maine, and Wisconsin created or expanded prohibitions on who can place such advertisements, limited the purposes for which these advertisements can be placed, restricted the public media that can be used (e.g., the internet), and/or provided penalties for violations. Officials from selected states, child welfare and adoption organizations, and adoption agencies we interviewed discussed some trade-offs and considerations in implementing these approaches to deterring unregulated transfers. For example, several stakeholders said a power of attorney can be used for legitimate purposes, such as a military parent transferring custody of their child to a trusted friend while on deployment. They noted that placing additional conditions on power of attorney transfers can create a burden for these families. In addition, officials from three selected states and three child welfare and adoption organizations questioned how states could enforce the use of a power of attorney. Officials from one national organization specializing in adoption law said courts that may be involved in approving power of attorney agreements have other priorities and may not have time to monitor these agreements. Several stakeholders also said families often go online to access adoption resources and peer support forums. They said states need to consider the information that these online forums provide to adoptive families when considering laws related to the internet. In addition to approaches that would deter unregulated transfers, 4 of the 15 states we reviewed enacted laws or made changes to child welfare programs to improve post-adoption services for families. Specifically: Arkansas enacted a law that directed the state child welfare agency to adopt rules to ensure that post-adoptive services are provided to all parents who seek assistance to prevent their adoptions from being disrupted. Virginia enacted a law and made changes to its state child welfare programs to improve post-adoption services based on recommendations from a study it conducted on unregulated transfers. 
The law requires the state registrar to issue, along with new adoptive birth certificates, a list of available post-adoption services, and requires the state child welfare agency to provide a list of such services to the registrar and publish it on its website. In addition, Virginia officials said the state child welfare agency plans to modify the solicitation for its post-adoption services contracts to allow services to be provided by multiple regional providers rather than one statewide provider. Virginia officials said the intent of this change is to increase access to services for families statewide. Illinois and New York also made changes to their child welfare programs to increase outreach specifically to new parents who adopted from foster care, although these states did not make statutory changes. Illinois developed a pilot project for agencies facilitating foster care adoptions to host celebrations and social events to build relationships with these families and connect them with other families. New York developed a brochure for adoption agencies to provide to new adoptive parents that includes information on unregulated transfers and possible sources of help with post-adoption needs. While many stakeholders we spoke with highlighted families’ challenges with accessing pre- and post-adoption services as key reasons for unregulated transfers, they also commented on possible challenges in implementing certain policy options to improve access to and availability of such services. For example, officials from nearly half of the child welfare and adoption organizations we spoke with said building a strong infrastructure for adoption services can be a lengthy and costly task. They said states have been trying to bolster services, but have had limited success. Given limited funding, officials from most selected states, child welfare and adoption organizations, and adoption agencies we interviewed expressed concern about the level of support for post-adoption services. Many of these stakeholders said families experiencing difficulties in their adoptions need services, and unregulated transfers are a last resort for desperate families who feel they have no other option. They also stated that improving access to effective services may ultimately help all families meet the needs of their adopted children. Federal agencies have made some collaborative and individual efforts to address unregulated transfers, mainly by raising awareness of the need for improved pre- and post-adoption services and by sharing information with states (see table 5). In some instances they have also collaborated with non-governmental organizations that have relationships with state child welfare and law enforcement agencies, such as the Association of Administrators of the Interstate Compact on the Placement of Children and the National Association of Attorneys General. As shown in table 5, the State Department established an interagency working group in October 2013 to develop a coordinated federal response to unregulated transfers. Other federal agency participants are USCIS, HHS, and Justice. With input from the group, the State Department began work to revise regulations regarding international pre-adoption training requirements. State Department officials said the revisions may include an increased number of minimum required hours and additional required content, drawing from training curriculum used by child welfare agencies for prospective parents in foster care adoptions.
In addition, the revisions may include required in-person components for training. State Department officials said they plan to provide proposed revisions to the Office of Management and Budget by the end of 2015 for review, and the proposed regulations will be subject to a public comment period before being finalized. In addition, in February 2015, USCIS issued revised immigration applications and petitions which are used by certain families applying to adopt from certain countries. The revisions included a requirement that families disclose whether they have previously filed international adoption applications or petitions and the result of the filings (i.e., approval, denial, withdrawal). Additionally, the revisions require families to disclose if they have experienced a disruption or dissolution of an international adoption in the past. HHS has also taken a number of actions to help improve access to adoption services. For example, it issued a memorandum in May 2014 to states that encouraged them to promote services to all adoptive families and outlined various sources of available federal funds. The memo also shared information on how unregulated transfers may violate state laws and encouraged states to review their laws and policies. In addition, HHS awarded two cooperative agreements with 5-year project periods in October 2014 to national organizations to improve post-adoption services. The National Adoption Competency Mental Health Training Initiative aims to build a web-based training curriculum for child welfare professionals and mental health practitioners to meet the mental health needs of adopted children, develop a national certification process for those completing it, and evaluate its outcomes and effectiveness. The National Quality Improvement Center for Adoption/Guardianship Support and Preservation aims to develop evidence-based pre- and post-adoption interventions and services for prospective and current adoptive families. Interventions and services will be evaluated at six to eight selected sites (e.g., state, county, or tribal child welfare agencies). Both projects are expected to be completed in September 2019. HHS officials also noted that information on pre-adoption requirements and post-adoption services, by state, is available on HHS’s Child Welfare Information Gateway, a website that provides information, resources, and tools on child welfare, child abuse and neglect, out-of-home care, adoption, and other topics. In addition, they said HHS has been involved in discussions with states regarding post-adoption services over the years. For example, HHS hosted a conference on the needs of adopted children—including post-adoption services—in August 2012, and was involved in a forum on unregulated transfers and services for adoptive families in February 2014 through the National Association of State Adoption Programs, Inc. Because states are responsible for much of the work to improve adoption services, the interagency working group has collaborated with national organizations to share information with states. Specifically, Justice worked with the National Association of Attorneys General to gather information on existing state laws and pending legislative proposals to address unregulated transfers. Research fellows at the National Association compiled this information for all states. The organization also requested information from all state attorneys general offices, and received responses from six states and the District of Columbia. 
The organization completed this work in June 2015, and Justice officials said they are reviewing the study and will work with the interagency working group to determine next steps, if any, to be taken. In addition, the Association of Administrators of the Interstate Compact on the Placement of Children is working to develop a national outreach campaign to raise awareness about unregulated transfers and provide information on alternatives to this practice. Officials from the Association said they are in the process of soliciting funds from private and non-profit organizations to support such a campaign. Despite these efforts, federal officials acknowledged that gaps in services for adoptive families remain, and determining how to provide them is a difficult task for public and private agencies working with these families. For example, HHS officials noted limitations to the federal government’s ability to support post-adoption services. They said that while all adopted children will need some level of support after an adoption is final, the main source of federal support—the Title IV-E Adoption Assistance program—is limited, and is generally available only to families who adopted eligible children from foster care. Consistent with our findings in previous reports, HHS officials said funds from other federal programs that states can use to support services for private adoptions, including international adoptions, are limited. Officials said families who cannot afford services on their own must often rely on services supported by state and local funding or those provided by private adoption agencies, and funds from these sources are also limited. HHS officials told us that the administration included in its fiscal year 2016 budget request a legislative proposal that would provide an increase of $587 million over 10 years for pre- and post-adoption services. They said this funding would target services to families with children who may be subject to unregulated transfers as well as those at risk of entering foster care due to an adoption in crisis. Federal officials said they will continue to examine ways to address unregulated transfers. For example, the State Department has developed a charter to outline its goals and plans for future work. State Department officials said they will use this charter to facilitate future efforts with the interagency working group. We provided a draft of this report to the Secretaries of Health and Human Services, Homeland Security, and State and the Attorney General of the United States for review and comment. The Departments of Health and Human Services, Homeland Security, and State provided technical comments that were incorporated, as appropriate. The Department of Justice had no comments. We are sending copies of this report to relevant congressional committees, the Secretaries of Health and Human Services, Homeland Security, and State, the Attorney General of the United States, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix II. 
GAO examined (1) the reasons adoptive families consider unregulated child custody transfers, and services that exist to support these families before they take such an action; (2) what is known about the prevalence of these transfers; and (3) actions selected states and federal agencies have taken to help address such transfers. To address these objectives, we used a variety of methods. Specifically, we conducted interviews with 45 agencies and organizations, including officials from federal and selected state agencies, child welfare and adoption organizations, and adoption agencies, to acquire a range of perspectives on this topic; reviewed relevant federal laws and regulations, selected state laws, and federal and selected state policies; reviewed and analyzed documentation provided by officials we interviewed; conducted a search of related literature and reviewed relevant articles; and searched online forums on selected social media sites to find illustrative examples of families who may be considering unregulated transfers. Because children adopted domestically as infants and those in biological families may be less likely to have mental health issues due to trauma and institutionalization, and reports of unregulated transfers have primarily pertained to children adopted internationally or from foster care, our report focuses on international and foster care adoptions.

To understand why families consider unregulated child custody transfers, what training and services are available to adoptive families, and actions selected states and federal agencies have taken to help address such transfers, we conducted interviews with 45 agencies, states, and organizations, including federal officials, representatives from national child welfare and adoption organizations, officials from selected states, and representatives from adoption agencies. Federal officials we interviewed included those from the Department of State (State Department), the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS), the Department of Health and Human Services (HHS), and the Department of Justice (Justice). We interviewed representatives from 19 organizations that work on child welfare and adoption issues. The 19 organizations we interviewed were selected to represent a variety of views on adoption and child welfare-related policy, training, and research. For example, these organizations specialized in certain aspects of adoption, including adoption law, home studies, pre-adoption training, and post-adoption services. We interviewed the following child welfare and adoption organizations and experts: American Academy of Adoption Attorneys; American Bar Association's Center on Children and the Law; Association of Administrators of the Interstate Compact on the Placement of Children; Center for Adoption Policy; Center for Adoption Support and Education; Child Welfare League of America; Coalition for Children, Youth, and Families; Congressional Coalition on Adoption Institute; Council on Accreditation; the Donaldson Adoption Institute; Joint Council on International Children's Services; Madeline Freundlich; Maureen Flatley; National Center for Missing and Exploited Children; National Center on Adoption and Permanency; National Conference of State Legislatures; North American Council on Adoptable Children; Spaulding for Children; and Voice for Adoption.
In addition, we interviewed officials from state child welfare agencies and other relevant offices in seven selected states: Colorado, Florida, Illinois, Louisiana, Ohio, Virginia, and Wisconsin. These states were chosen based on factors such as legislative activity related to unregulated transfers in the state, as identified by representatives from child welfare and adoption organizations during our initial interviews, and the state's post-adoption programs. These states also provided variety in numbers of adoptions in relation to the state's population. Interviews with officials were conducted through site visits to Florida and Wisconsin, and phone calls to the remaining states. In the states selected, we conducted interviews with officials from state child welfare agencies and other relevant offices, such as those from state attorney general offices, departments of justice, and adoption agency licensing offices. Finally, we interviewed representatives from 15 international and domestic adoption agencies. The adoption agencies we interviewed were selected from those either recommended by national organization representatives or those licensed or accredited in the states we visited in person, to achieve variation in agency size, including budget and staff, and types of adoptions facilitated. For example, 11 of the 15 adoption agencies facilitate international adoptions. The remaining 4 agencies facilitate domestic adoptions only, such as through the child welfare system (through a contract with the state child welfare agency) or privately.

In the report we refer to different types of organizations when reporting information from our interviews with the 7 selected states, 19 child welfare and adoption organizations, and 15 adoption agencies. References to "stakeholders" include responses from officials in all three of these groups. In our interviews with stakeholders, we used a semi-structured interview protocol that included open-ended questions about reasons that families may consider unregulated transfers, types of services adoptive families may need to prevent them from resorting to these transfers, and types of services that are available to adoptive families. Information was volunteered by officials in each interview in response to these open-ended questions. Thus, the counts of organizations citing such responses vary: "all" stakeholders represents 41 stakeholders; "most" represents 21-40; "many" represents 10-20; "several" represents 4-9; and "a few" represents 2-3.

We reviewed relevant documents to corroborate information obtained in our interviews. To examine federal efforts related to unregulated transfers, we reviewed relevant documents obtained in our interviews with federal officials. We also reviewed relevant federal laws, regulations, and policies on agency roles and responsibilities as well as GAO criteria on internal controls. To examine selected state efforts related to unregulated transfers, we reviewed information on recently enacted laws, proposed legislation, and other documents provided by child welfare and other agency officials in our seven selected states. Through our interviews with representatives from child welfare and adoption organizations and others, we identified at least eight additional states that had initiated legislative activity related to unregulated transfers since we began our review: Arkansas, Maine, Maryland, Massachusetts, Nebraska, New York, North Carolina, and South Carolina.
For these eight identified states, we also reviewed relevant laws, proposed legislation, and other documents provided by child welfare and other agency officials in these states. For proposed legislation, we reviewed only the version confirmed by the state officials. We did not do further research on the status of these proposals; therefore, additional changes may have been made that are not reflected in this report, and some proposed legislation included in the report may no longer be pending. We asked officials in the 15 selected and identified states to confirm whether their state had enacted a law, introduced proposed legislation, or taken other relevant action as of July 2015. We did not report on such activity after this date. Since we did not attempt to identify all activity related to unregulated transfers in all states, there may be other states with relevant legislative or other activity not included in our review.

We conducted a search of literature related to unregulated child custody transfers in order to gather information about why families may consider these transfers, what policies exist to safeguard children who might be subject to such transfers, what training is required to adopt, and what services are available to adoptive families. While our search resulted in some literature on adoption dissolutions and disruptions as well as services for adoptive families, we were unable to locate academic literature regarding unregulated transfers.

We searched online forums on selected social media sites to find illustrative examples of families who may be considering unregulated child custody transfers. Using keywords such as "rehoming" and "adoption disruption," we searched selected social media sites to locate online forums—such as groups and message boards—that parents might use to seek new homes for their children. For example, these forums were characterized on the sites as support groups for parents who wish to dissolve an adoption or whose children have behavioral issues. The results of our searches were not exhaustive, as we were unable to ascertain whether we identified most or all social media sites and forums with online activity that may relate to unregulated child custody transfers. We observed posts by participants in eight forums on two websites over a 15-month time period (January 1, 2014, through April 1, 2015). We analyzed posts on two of the eight forums that involved individuals who posted that they were seeking a new family for their child or who posted that they wanted to adopt a child. We did not find posts involving individuals seeking a new family for their child in the remaining six forums. The online posts we identified did not provide sufficient information to determine whether the posters intended to pursue an unregulated transfer, or to pursue an adoption or other legal placement. Since we did not investigate individual cases, our approach did not allow us to determine whether the information posted by online participants was accurate. Moreover, because discussions between online participants can be continued privately, we were unable to determine whether a child was actually transferred to another family and, if so, whether this was done through a court-approved process or through an unregulated transfer. One of the eight forums we observed was shut down in March 2015 by the social media site that hosted it.

We conducted this performance audit from October 2014 to September 2015 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, the following staff members made key contributions to this report: Elizabeth Morrison, Assistant Director; Elizabeth Hartjes; Nhi Nguyen; and Amy Sweet. Also contributing to this report were Susan Aschoff; Laurel Beedon; Maurice Belding; Sarah Cornetto; Sara Edmondson; Kirsten Lauber; Ashley McCall; Mimi Nguyen; Brynn Rovito; and Almeta Spencer.
Parents have the legal responsibility to protect and care for their children. However, recent media reports have illuminated a practice involving unregulated custody transfers of adopted children. Commonly referred to as “rehoming,” this practice involves parents who turn to the internet or other unregulated networks to find a new home for their child. These media reports found instances in which adopted children were placed in dangerous situations where they were harmed by the adults who received them. GAO was asked to review issues related to unregulated transfers of adopted children. GAO examined (1) the reasons adoptive families consider unregulated child custody transfers, and services that exist to support these families before they take such an action; (2) what is known about the prevalence of these transfers; and (3) actions selected states and federal agencies have taken to address such transfers. GAO reviewed relevant federal laws, regulations, and policies and selected state laws and proposed legislation. GAO also interviewed officials from federal agencies, 19 child welfare and adoption organizations, 15 adoption agencies, and 7 states selected primarily because of legislative activity on unregulated transfers. GAO also searched online activity on selected social media sites to find illustrative examples of families who may be considering unregulated transfers. The Departments of Health and Human Services, Homeland Security, and State provided technical comments. The Department of Justice had no comments. Some adoptive families may consider giving their children to another family outside of the courts and child welfare system—an “unregulated child custody transfer”—because of a crisis within the adoptive family and difficulties accessing support services, according to officials GAO interviewed from selected states, child welfare and adoption organizations, and adoption agencies. Children adopted internationally or from foster care may need special care or counseling because of a history of institutionalization and trauma. Some parents, particularly those who adopted internationally, may not be prepared to deal with their adopted child's complex needs. Federal regulations require agencies facilitating international adoptions to provide parents with at least 10 hours of pre-adoption training. In contrast, about half of the states require agencies facilitating foster care adoptions to provide at least 27 hours of training, according to data obtained from Department of Health and Human Services (HHS) officials in May 2015. Many officials said adoptive parents may experience challenges finding mental health services for their families, such as therapists familiar with adoption issues. Many officials also said parents who adopt children with more severe needs may have difficulty finding and paying for intensive services such as residential treatment, which can cost thousands of dollars per month. Officials said these challenges may lead families to seek out unregulated transfers. Little is known about the prevalence of unregulated transfers. Because they happen without any oversight, these transfers are difficult to track and no federal agency keeps statistics on their occurrence. GAO's observations of social media sites found that some parents have been using online forums to seek new homes for their adopted children. During a 15-month period, GAO identified 23 instances in which a parent posted that they were seeking a new family for their child. 
Because GAO did not investigate these posts and because discussions between online participants can be continued privately, GAO was unable to determine whether these participants intended to pursue a legal placement or an unregulated transfer, or whether such a transfer actually took place. Selected states and federal agencies have taken some steps to address unregulated transfers. GAO identified at least 15 states in which there was legislative and other activity in recent years intended to address these transfers. Seven of the 15 states had enacted legislation and 3 made changes to state child welfare programs as of July 2015. The most common approaches were criminalizing unregulated transfers or actions that may lead to these transfers, and restricting the advertisement of children for placement. In addition, activity in several states involved improving post-adoption services, which many officials said was a key need for families who resort to unregulated transfers. However, federal officials and others said addressing service needs can be difficult and time-consuming, and funding for these services is limited. At the federal level, several agencies established an interagency working group on unregulated transfers in October 2013. Officials from the Department of State said they plan to revise international pre-adoption training requirements that may include an increased number of minimum hours. HHS issued a memorandum in May 2014 encouraging states to promote post-adoption services and to review their policies to address unregulated transfers.
In fiscal year 2007, the Department of Veterans Affairs (VA) paid about $37.5 billion in disability compensation and pension benefits to more than 3.6 million veterans and their families. Through its disability compensation program, the Veterans Benefits Administration (VBA) pays monthly benefits to veterans with service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty). Monthly benefit amounts vary according to the severity of the disability. Through its pension benefit program, VBA pays monthly benefits to wartime veterans with low incomes who are either elderly or permanently and totally disabled for reasons not service-connected. In addition, VBA pays dependency and indemnity compensation to some deceased veterans' spouses, children, and parents and to survivors of servicemembers who died while on active duty.

When a veteran submits a benefits claim to any of VBA's 57 regional offices, a Veterans Service Representative (VSR) is responsible for obtaining the relevant evidence to evaluate the claim. For disability compensation benefits, such evidence includes veterans' military service records, medical examinations, and treatment records from VA medical facilities and private providers. Once a claim is developed (i.e., has all the necessary evidence), a Rating Veterans Service Representative (RVSR) evaluates the claim, determines whether the claimant is eligible for benefits, and assigns a disability rating based on degree of impairment. The rating determines the amount of benefits the veteran will receive. For the pension program, claims processing staff review the veteran's military, financial, and other records to determine eligibility. Eligible veterans receive monthly pension benefit payments based on the difference between their countable income, as determined by VA, and the maximum pension amounts as updated annually by statute.

In fiscal year 2007, VBA employed over 4,100 VSRs and about 1,800 RVSRs to administer the disability compensation and pension programs' caseload of almost 3.8 million claims. In 2001, the VA Claims Processing Task Force noted that VSRs were responsible for understanding almost 11,000 separate benefit delivery tasks, such as tasks in claims establishment, claims development, public contacts, and appeals. To improve VBA's workload controls, accuracy rates, and timeliness, the Task Force recommended that VA divide these tasks among a number of claims processing teams with defined functions. To that end, in fiscal year 2002, VBA developed the Claims Processing Improvement model that created six claims processing teams, based on phases of the claims process. (See table 1.)

According to one VA official, new claims processing staff generally begin as VSRs and typically have a probationary period of about one year. After their probationary period ends, staff can either continue to qualify to become senior VSRs or apply for RVSR positions. VSRs are also given the option to rotate to other VSR claim teams to gain a broader understanding of the claims process. VBA has established a standardized curriculum for training new VSRs and RVSRs on how to process claims, and it has an 80-hour annual training requirement for both new and experienced staff; however, it does not hold individual staff accountable for meeting this requirement.
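To make the pension computation described above concrete—monthly payments are based on the difference between the statutory maximum pension amount and the veteran's countable income—the following is a minimal illustrative sketch. The dollar figures are hypothetical and do not come from VA's rate tables, which also vary by factors such as dependents.

```python
# Minimal illustration of the pension computation described above.
# Figures are hypothetical, not actual VA rate-table values.
MAX_ANNUAL_PENSION_RATE = 12_000  # hypothetical statutory maximum, in dollars

def monthly_pension_payment(countable_income: float,
                            max_annual_rate: float = MAX_ANNUAL_PENSION_RATE) -> float:
    """Monthly payment based on the difference between the statutory maximum
    annual pension rate and the veteran's countable income, floored at zero."""
    annual_benefit = max(max_annual_rate - countable_income, 0.0)
    return round(annual_benefit / 12, 2)

# Example: a veteran with $4,800 in countable annual income
print(monthly_pension_payment(4_800))  # (12,000 - 4,800) / 12 = 600.0
```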
VBA has designed a uniform curriculum for training new VSRs and RVSRs that is implemented in three phases—initial orientation training, a 3-week training session referred to as centralized training, and comprehensive on-the-job and classroom training after centralized training. It also requires all staff to meet an annual 80-hour training requirement. To ensure that staff meet this requirement, each regional office must develop an annual training plan, which can contain a mix of training topics identified by VBA central office and by the regional office. However, individual staff members are not held accountable for meeting their training requirement.

VBA has a highly structured, three-phased program for all new claims processors designed to deliver standardized training, regardless of training location or individual instructors. (See fig. 1.) For example, each topic included in this training program contains a detailed lesson plan with review exercises, student handouts, and copies of slides used during the instructor's presentation. Each phase in this program is designed to both introduce new material and reinforce material from the previous phase, according to a VBA official.

According to VBA policy, the first phase of training for new VSRs and RVSRs is prerequisite training. New VSRs and RVSRs begin prerequisite training at their home regional office as soon as they begin working. Prerequisite training lays the foundation for future training by introducing new VSRs to topics such as the software applications used to process and track claims, medical terminology, the system for maintaining and filing a case folder, and the process for requesting medical records. Although VBA specifies the topics that must be covered during prerequisite training, regional offices can choose the format for the training and the time frame. New VSRs and RVSRs typically spend 2 to 3 weeks completing prerequisite training in their home office before they begin the second program phase, centralized training.

During what is referred to as centralized training, new VSRs and RVSRs spend 3 weeks in intensive classroom training. Participants from multiple regional offices are typically brought together in centralized training sessions, which may occur at their home regional office, another regional office, or the Veterans Benefits Academy in Baltimore, Maryland. According to VBA officials in three of the four offices we visited, bringing together VSRs and RVSRs from different regional offices helps to promote networking opportunities, and VBA officials from two of these offices also stated that it provides a nationwide perspective on VBA. Centralized training provides an overview of the technical aspects of the VSR and RVSR positions. Training instructors should follow the prescribed schedule and curriculum dictating when and how material is taught. For example, for a particular topic, the instructor's guide explains the length of the lesson, the instructional method, and the materials required; lays out the information that must be covered; and provides exercises to review the material. (See fig. 2 for a sample of an instructor's guide from the centralized training curriculum.) Centralized training classes have at least three instructors, but the actual number can vary depending on the size of the group. VBA's goal is to maintain a minimum ratio of instructors to students.
The first week of centralized training for VSRs focuses on key concepts, such as security, privacy and records management; terminology; and job tools, such as the policy manual and software applications. The final 2 weeks of training focus on the different roles and responsibilities of VSRs on the Pre-determination and Post-determination teams in processing claims. To practice processing different types of claims and processing claims from start to finish, VSRs work on either real claims or hypothetical claims specifically designed for training. Centralized training for new RVSRs—many of whom have been promoted from the VSR position—focuses on topics such as systems of the human body, how to review medical records, and how to interpret a medical exam. According to staff in one site we visited, RVSRs new to VBA also take VSR centralized training or its equivalent to learn the overall procedures for processing claims.

To accommodate the influx of new staff it must train, in fiscal year 2007 VBA substantially increased the frequency of centralized training and is increasing student capacity at the Veterans Benefits Academy. During fiscal year 2007, VBA held 67 centralized training sessions for 1,458 new VSRs and RVSRs. Centralized training sessions were conducted at 26 different regional offices during fiscal year 2007, in addition to the Veterans Benefits Academy. By comparison, during fiscal year 2006, VBA held 27 centralized training sessions for 678 new claims processors.

To implement centralized training, VBA relies on qualified regional office staff who have received training on how to be an instructor. According to VBA officials, centralized training instructors may be Senior VSRs, RVSRs, supervisors, or other staff identified by regional office managers as having the capability and the right personality to be effective instructors. Potential instructors have certain training requirements. First, they must complete the week-long Instructor Development Course, which covers the ways different adults learn, the process for developing lesson plans, and the use of different training methods and media. During this course, participants are videotaped and given feedback on their presentation style. In addition, each time instructors teach a centralized training session, they are supposed to take the 2.5-day Challenge Curriculum Course, designed to update instructors on changes to the curriculum and general training issues. Between October 2006 and February 2008, about 250 VSRs and RVSRs from regional offices completed the Instructor Development Course, and VBA officials reported that, given the influx of new VSRs and RVSRs, they are increasing the number of times this course is offered in order to train more instructors. Instructors can teach centralized training sessions in their home office, another regional office, or the Veterans Benefits Academy.

When new VSRs and RVSRs return to their home office after centralized training, they are required to begin their third phase of training, which is supposed to include on-the-job, classroom, and computer-based training, all conducted by and at their regional office. In the regional offices we visited, managers indicated that new VSRs and RVSRs typically take about 6 to 12 months after they return from centralized training to complete all the training requirements for new staff. During this final phase, new claims processing staff cover more advanced topics, building on what they learned in centralized training.
Under the supervision of experienced claims processors, they work on increasingly complex types of real claims. On-the-job training is supplemented in the offices we visited by regular classroom training that follows a required curriculum of courses developed by VBA’s Compensation and Pension Service, specifically for new VSRs and RVSRs. For example, new VSRs might complete a class in processing burial claims and then spend time actually processing such claims. The amount of time spent working on each type of claim varies from a couple of days to a few weeks, depending on the complexity of the claim. On-the-job training is also supposed to be supplemented with modules from the Training and Performance Support System (TPSS), an interactive on-line system that can be used by staff individually or in a group. TPSS modules provide detailed lessons, practice cases, and tests for VSRs and RVSRs. Modules for new VSRs cover topics such as burial benefits and medical terminology; RVSR modules cover topics such as the musculoskeletal system, general medical terminology, and introduction to post-traumatic stress disorder. A policy established by VBA’s Compensation and Pension Service requires both new and experienced VSRs and RVSRs to complete a minimum of 80 hours of technical training annually, double the number VBA requires of its employees in other technical positions. VBA officials said this higher training requirement for VSRs and RVSRs is justified because their jobs are particularly complex and they must work with constantly changing policies and procedures. The 80-hour training requirement has two parts. At least 60 hours must come from a list of core technical training topics identified by the central office of the Compensation and Pension Service. For example, core topics for VSRs in fiscal year 2007 included establishing veteran status and asbestos claims development; topics for RVSRs included due process provisions and eye-vision issues. VBA specifies more core topics than are necessary to meet the 60-hour requirement, so regional offices can choose those topics most relevant to their needs. They can also choose the training method used to address each topic, such as classroom or TPSS training. (See app. II for the list of core technical training topics for fiscal year 2007.) Regional offices determine the training topics that are used to meet the remaining 20 hours, based on local needs and input. Regional offices may select topics from the list of core technical training topics or identify other topics on their own. The four regional offices we visited varied in the extent to which they utilized their discretion to choose topics outside the core technical training topics in fiscal year 2007. Two sites selected the required 60 hours of training from the core requirements and identified their own topics for the remaining 20 hours. In the other two sites, almost all the training provided to staff in fiscal year 2007 was based on topics from the list of core requirements. An official in one regional office, for example, said that his office used its full 20 hours to provide training on new and emerging issues that are not covered by the core technical training topics, as well as training to address error prone areas. An official in another regional office said the core requirements satisfied staff training needs in fiscal year 2007, possibly because this regional office had a large proportion of new staff and the core topics are focused on the needs of new staff. 
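As a rough illustration of the structure of the annual requirement described above—at least 80 hours in total, with at least 60 hours drawn from the Compensation and Pension Service's core technical topics and the remainder chosen locally—the following is a minimal sketch. The course mix and hour counts are hypothetical and are not drawn from any regional office's actual plan; only the two core topic titles come from the fiscal year 2007 list discussed above.

```python
# Minimal sketch of the annual requirement described above: at least 80 total
# hours, with at least 60 hours drawn from the core technical topics identified
# by the Compensation and Pension Service. Hour counts here are hypothetical.
completed_courses = [
    # (course title, hours, drawn from the core topic list?)
    ("Establishing veteran status", 24, True),
    ("Asbestos claims development", 40, True),
    ("Locally identified emerging issues", 20, False),
]

total_hours = sum(hours for _, hours, _ in completed_courses)
core_hours = sum(hours for _, hours, is_core in completed_courses if is_core)

meets_annual_requirement = total_hours >= 80 and core_hours >= 60
print(total_hours, core_hours, meets_annual_requirement)  # 84 64 True
```

A check of this kind mirrors the review that VBA central office performs on regional office training plans, described next.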
Regional offices must develop training plans each year that indicate which courses will actually be provided to staff to enable them to meet the 80-hour training requirement. The training plan is a list of courses that the regional office plans to offer throughout the year, as well as the expected length and number and types of participants in each course. In the regional offices we visited, when managers develop their training plans, they solicit input from supervisors of VSRs and RVSRs and typically also consider national or local error trend data. Regional offices must submit their plans to the VBA central office at the beginning of each fiscal year for review and feedback. Central office officials review the plans to determine whether (1) the regional office will deliver at least 60 hours of training on the required core topics, (2) the additional topics identified by the regional office are appropriate, and (3) staff in similar positions within an office receive the same level and type of training. According to central office officials, they provide feedback to the regional offices on their current plans as well as guidance on what topics to include in the next year's training plans. Regional offices can adjust their training plans throughout the year to address shifting priorities and unexpected training needs. For example, a regional office may add or remove courses from the plan in response to changing trends in errors or policy changes resulting from legal decisions. (See app. III for excerpts from the fiscal year 2007 training plans from the regional offices we visited.)

While regional offices have discretion over the methods they use to provide training, the four offices we visited relied primarily on classroom training in fiscal year 2007. In each of these offices, at least 80 percent of the total fiscal year 2007 training hours completed by all claims processors were in the form of classroom instruction (see fig. 3). Officials in two of the regional offices we visited said they used lesson plans provided by the Compensation and Pension Service and adapted these plans to the needs of their staff; one regional office developed its own courses. An official in one office said they sometimes invite guest speakers, and an official in another regional office said that classroom training is sometimes delivered as part of team meetings. The offices we visited generally made little use of other training methods. Only one office used TPSS for its training more than 1 percent of the time. Two offices used self-instruction—such as reading memos from VBA central office—for about 10 percent of their training, and no office used videos for more than 1 percent of their training. The central office usually communicates immediate policy and regulatory changes through memos called Fast Letters, which may be discussed in team meetings or may just be read by staff individually.

Because the agency has no policy outlining consequences for individual staff who do not complete their 80 hours of training per year, individual staff are not held accountable for meeting their annual training requirement, and at present, VBA central office lacks the ability to track training completed by individual staff members. According to VBA officials, however, the agency is in the process of implementing an automated system that should allow it to track the training each staff member completes. Officials reported that this system is expected to be implemented during fiscal year 2008.
VBA officials reported that this system will be able to record the number of training hours and the courses completed for each individual, staff position, and regional office. One official said the central office and regional office supervisors will have the ability to monitor training completed by individual staff members, but that central office will likely not monitor the training completed by each individual staff member, even though it may monitor the training records for a sample of staff members. Furthermore, despite the absence of a VBA-wide tracking system, managers in two of the regional offices we visited reported using locally developed tracking methods to determine the number of training hours their staff had completed.

While individuals are not held accountable, VBA reported taking some steps to ensure that staff complete the required number of training hours. VBA central office periodically reviews the aggregated number of training hours completed at each regional office to determine whether the office is on track to meet the training requirement. According to a VBA official, managers in offices where the staff is not on track to complete 80 hours of training during the year can be reprimanded by a higher-level manager, and if their staff do not meet the aggregate training hours at the end of the fiscal year, managers could face negative consequences in their performance assessments.

VBA is taking steps to strategically plan its training for VSRs and RVSRs, including the establishment of a training board to assess VBA's training needs. VBA has also made some effort to evaluate its training for new staff, but does not require regional offices to collect feedback from staff on any of the training they provide. Although some regional offices collect some training feedback, it is not shared with VBA central office. Both new and experienced staff we interviewed did, in fact, report some problems with their training. A number of new staff raised issues with how consistently their training curriculum was implemented. Experienced staff differed in their assessments of VBA's annual training requirement, with some indicating they struggle to meet this requirement because of workload pressures or that training topics are sometimes redundant or not relevant to their position.

VBA is taking steps to strategically plan its training for claims processors, in accordance with generally accepted practices identified by GAO. (See app. I for a detailed description of these generally accepted practices.) VBA has made an effort to align training with the agency's mission and goals. According to VBA documents, in fiscal year 2004 an Employee Training and Learning Board (board) was established to ensure that training decisions within the VBA are coordinated; support the agency's strategic and business plans, goals and objectives; and are in accordance with the policy and vision of VBA. Some of the board's responsibilities include establishing training priorities and reviewing regional office and annual training plans. VBA has identified the skills and competencies needed by VBA's claims processing workforce. VBA developed a decision tree and task analysis of the claims process, which GAO experts in the field of training told us made it possible to understand and map both the claims process and the decisions associated with it, and which supported the development of VBA's training curriculum. VBA is taking steps to determine the appropriate level of investment in training and prioritize funding.
According to VBA documents, some of the board's responsibilities include developing annual training budget recommendations and identifying and recommending training initiatives to the Under Secretary of Benefits. VBA officials also reported developing several documents that made a business case for different aspects of VBA's training, such as VA's annual budget and the task analysis of the VSR and RVSR job positions. According to one VBA official, the agency identifies regulatory, statutory, and administrative changes as well as any legal or judicial decisions that affect how VBA does business and issues guidance letters, or Fast Letters, which can be sent out several times a year, to notify regional offices of these changes. Also, as a result of Congress authorizing an increase in its number of full-time employees and VBA's succession planning efforts, VBA has increased the number of centralized training sessions for new staff and has also increased the number of Instructor Development Courses offered to potential centralized training instructors. As a result, VBA is taking steps to consider government reforms and initiatives to improve its management and performance when planning its training.

According to accepted practices, federal agencies should also evaluate their training programs and demonstrate how these efforts help employees, rather than just focusing on activities or processes (such as number of training participants or hours of training). VBA has made some efforts to evaluate its training for claims processors. During the 3-week centralized training session for new staff, VBA solicits daily feedback from participants using forms that experts in the training field consider well-constructed and well-balanced. According to one GAO expert, the forms generally employ the correct principles to determine the effectiveness of the training and ascertain whether the instructor effectively presented the material (see fig. 4). VBA officials told us that they have used this feedback to improve centralized training for new staff. Management at one regional office cited the decision to separate training curricula for VSRs on Pre-determination teams and VSRs on Post-determination teams as an example of a change based on this feedback.

Although VBA evaluates centralized training, it does not require regional offices to obtain feedback from participants on any of the training they provide to new and experienced staff. In a previous GAO report, VA staff told us that new training materials they develop are evaluated before being implemented. However, none of the regional offices we visited consistently collect feedback on the training they conduct. Supervisors from three of the regional offices we visited told us that they collect feedback on some of the training their office conducts, but this feedback largely concerns the performance of the instructor. Participants are generally not asked for feedback on course content. Moreover, regional offices we visited that do, to some degree, collect feedback do not share this information with VBA.

According to GAO experts in the training field, VBA's training curriculum for new staff appears well designed. VBA's curriculum for new staff conforms to adult learning principles, carefully defining all pertinent terms and concepts, and providing abundant and realistic examples of claims work.
GAO experts also determined that VBA's training for those who teach the curriculum for new staff was well designed and would enable experienced claims processors to become competent trainers because they are coached on teaching theory and have multiple opportunities to practice their teaching skills and receive feedback. Many of the new staff at all four sites we visited reported that centralized training provided them with a good foundation of knowledge and prepared them for additional training conducted by their regional office. Also, regional office managers from three offices we visited told us that centralized training affords new staff the opportunity to network with other new staff at different regional offices, which imbues a sense of how their positions fit in the organization.

However, some staff reported that VBA's implementation of their centralized training was not always consistent. A number of staff at three regional offices reported that during their centralized training the instructors sometimes taught different ways of performing the same procedures or disagreed on claim procedures. Regional office officials told us that while centralized training instructors attempt to teach consistently through the use of standardized training materials, certain procedures can be done differently in different regional offices while adhering to VBA policy. For example, regional offices may differ on what to include in veteran notification letters. VBA officials also told us that centralized training conducted at the regional offices may not be as consistent as centralized training conducted at the Veterans Benefits Academy. According to these officials, unlike the regional offices, the Veterans Benefits Academy has on-site training experts to guide and ensure that instructors are teaching the curriculum consistently.

New staff also gave mixed assessments about how training was conducted at their home office after they returned from centralized training. While some staff at all of the regional offices we visited told us that the additional training better prepared them to perform their jobs, with on-the-job training identified as a useful learning tool, others told us that the training could not always be completed in a timely manner due to regional office priorities. Some management and staff at two of the regional offices we visited reported that, because of workload pressures, some of their RVSRs had to interrupt their training to perform VSR duties.

Also, a few new staff indicated that VBA's TPSS was somewhat difficult to use. Although TPSS was developed to provide consistent technical training designed to improve the accuracy of claims ratings, a number of staff at all of the regional offices we visited reported that TPSS was too theoretical. For example, some staff said it provided too much information and no practical exercises in applying the knowledge. Some staff also noted that certain material in TPSS was out-of-date with policy changes such as how to order medical examinations. Some staff at three of the regional offices also reported that TPSS was not always useful in training staff, in part, because TPSS does not use real cases. Three of the regional offices reported using TPSS for less than 1 percent of their training and VSRs at one regional office were unaware of what TPSS was.
At all of the regional offices we visited, staff we spoke with generally noted that training enables them to keep up-to-date on changes in laws and regulations as well as provides opportunities for obtaining refresher training on claims procedures they perform infrequently. However, regional office staff we spoke with differed in their assessment of the 80-hour requirement. Some regional office staff said the number of training hours required was appropriate, while others suggested that VBA adopt a graduated approach, with the most experienced staff being required to complete fewer hours than new staff. VBA officials told us that, in 2007, the Compensation and Pension Service reviewed their annual training requirements and determined the 80-hour annual training requirement was appropriate. However, the officials we spoke with could not identify the criteria that were used to make these determinations. Furthermore, VBA management does not systematically collect feedback from staff evaluating the usefulness of the training they must receive to meet this requirement. Consequently, when determining the appropriateness of the 80-hour requirement, VBA has not taken into account the views of staff to gauge the effect the requirement has on them.

Experienced staff had mixed views on training provided by the regional office. Staff at three regional offices said the core technical training topics set by the Compensation and Pension Service are really designed for newer staff and do not change much from year to year, and therefore experienced staff end up repeating courses. Also, a number of staff at all of the regional offices we visited told us some regional office training was not relevant for those with more experience. Conversely, other regional office staff note that although training topics may be the same from year to year, a person can learn something new each time the course is covered. Some VBA officials and regional office managers also noted that some repetition of courses is good for several reasons. Staff may not see a particular issue very often in their day-to-day work and can benefit from refreshers. Also, regional office managers at one office told us that the core technical training topics could be modified to reflect changes in policy so that courses are less repetitive for experienced staff.

Many experienced staff also reported having difficulty meeting the 80-hour annual training requirement due to workload pressures. Many of the experienced staff we spoke with, at each of the regional offices we visited, told us that there is a constant struggle between office production goals and training goals. For example, office production goals can affect the availability of the regional office's instructors. A number of staff from one regional office noted that instructors were unable to spend time teaching because of their heavy workloads and because instructors' training preparation hours do not count toward the 80-hour training requirement. Staff at another regional office told us that, due to workload pressures, staff may rush through training and may not get as much out of it as they should.

The elements used to evaluate individual VSRs' and RVSRs' performance appear to be generally aligned with VBA's organizational performance measures, something prior GAO work has identified as a well-recognized practice for effective performance management systems (see app. I).
Aligning individual and organizational performance measures helps staff see the connection between their daily work activities and their organization's goals and the importance of their roles and responsibilities in helping to achieve these goals. VSRs must be evaluated on four critical elements: quality, productivity, workload management, and customer service. RVSRs are evaluated on quality, productivity, and customer service. In addition, VBA central office requires regional offices to evaluate their staff on at least one non-critical element. The central office has provided a non-critical element called cooperation and organizational support, and although regional offices are not required to use this particular element, all four offices we visited did so (see table 2). For each element, there are three defined levels of performance: exceptional, fully successful, or less than fully successful. Table 2 refers only to the fully successful level of performance for each element.

Three critical elements in particular—quality, workload management, and productivity—are aligned with VBA's organizational performance measures (see table 3). According to VA's strategic plan, one key organizational performance measure for VBA is overall accuracy in rating disability claims. This organizational measure is aligned with the quality element for VSRs and RVSRs, which is assessed by measuring the accuracy of their claims-processing work. An individual performance element designed to motivate staff to process claims accurately should, in turn, help VBA meet its overall accuracy goal. Two other key performance measures for VBA are the average number of days that open disability claims have been pending and the average number of days it takes to process disability claims. VSRs are evaluated on their workload management, a measure of whether they complete designated claims-related tasks within specific deadlines. Individual staff performance in this element is linked to the agency's ability to manage its claims workload and process claims within goal time frames. Finally, a performance measure that VBA uses to evaluate the claims-processing divisions within its regional offices—and that, according to VBA, relates to the organization's overall mission—is production, or the number of compensation and pension claims processed by each office in a given time period. Individual VSRs and RVSRs are evaluated on their productivity, i.e., the number of claims-related tasks they complete per day. Higher productivity by individual staff should result in more claims being processed by each regional office and by VBA overall.

Providing objective performance information to individuals helps show progress in achieving organizational goals and allows individuals to manage their performance during the year by identifying performance gaps and improvement opportunities. Regional offices are supposed to use the critical and non-critical performance elements to evaluate and provide feedback to their staff. Supervisors are required to provide at least one progress review to their VSRs and RVSRs each year, indicating how their performance on each element compares to the defined standards for fully successful performance. In the offices we visited, supervisors typically provide some feedback to staff on a monthly basis.
For example, VSRs in the Atlanta regional office receive a memo on their performance each month showing their production in terms of average weighted actions per day, their accuracy percentage based on a review of a sample of cases, and how their performance compared to the minimum requirements for production and accuracy. If staff members fall below the fully successful level in a critical element at any time during the year, a performance improvement plan must be implemented to help the staff member improve. Performance elements related to collaboration or teamwork can help reinforce behaviors and actions that support crosscutting goals and provide a consistent message to all employees about how they are expected to achieve results. VSR and RVSR performance related to customer service is evaluated partly based on whether any valid complaints have been received about a staff member’s interaction with their colleagues. And performance related to the cooperation and organizational support element is based on whether staff members’ interaction with their colleagues is professional and constructive. Competencies, which define the skills and supporting behaviors that individuals are expected to exhibit to carry out their work effectively, can provide a fuller assessment of an individual’s performance. In addition to elements that are evaluated in purely quantitative terms, VBA uses a cooperation and organizational support element for VSRs and RVSRs that requires supervisors to assess whether their staff are exhibiting a number of behaviors related to performing well as a claims processor. Actively involving employees and stakeholders in developing the performance management system and providing ongoing training on the system helps increase their understanding and ownership of the organizational goals and objectives. For example, VA worked with the union representing claims processors to develop an agreement about its basic policies regarding performance management. Also, VBA indicated that it planned to pilot revisions to how productivity is measured for VSRs in a few regional offices, partly so VSRs would have a chance to provide feedback on the changes. Clear differentiation between staff performance levels is also an accepted practice for effective performance management systems. Systems that do not result in meaningful distinctions between different levels of performance fail to give (1) employees the constructive feedback they need to improve, and (2) managers the information they need to reward top performers and address performance issues. GAO has previously reported that, in order to provide meaningful distinctions in performance for experienced staff, agencies should use performance rating scales with at least three levels, and scales with four or five levels are preferable because they allow for even greater differentiation between performance levels. If staff members are concentrated in just one or two of multiple performance levels, however, the system may not be making meaningful distinctions in performance. VA’s performance appraisal system has the potential to clearly differentiate between staff performance levels. Each fiscal year, regional offices give their staff a rating on each critical and non-critical performance element using a three-point scale—exceptional, fully successful, or less than fully successful. 
Based on a VA-wide formula, the combination of ratings across these elements is converted into one of VA’s five overall performance levels: outstanding, excellent, fully successful, minimally satisfactory, and unsatisfactory (see fig. 5). Regional offices may award financial bonuses to staff on the basis of their end-of-year performance category. Prior to fiscal year 2006, VA used two performance levels—successful and unacceptable—to characterize each staff member’s overall performance. To better differentiate between the overall performance levels of staff, VA abandoned this pass-fail system in that year, choosing instead to use a five-level scale. However, there is evidence to suggest that the performance management system for VSRs and RVSRs may not clearly or accurately differentiate among staff’s performance. VBA central office officials and managers in two of the four regional offices we visited raised concerns with VA’s formula for translating ratings on individual performance elements into an overall performance rating. These officials said that under this formula it is more difficult for staff to be placed in certain overall performance categories than others, even if staff’s performance truly does fall within one of those categories. Indeed, at least 90 percent of all claims processors in the regional offices we visited were placed in either the outstanding or the fully successful category in fiscal year 2007. (Fig. 6 shows the distribution of overall performance ratings for claims processors in each office.) Central and regional office managers noted that, in particular, it is difficult for staff to receive an overall rating of excellent. Managers in one office said there are staff whose performance is better than fully successful but not quite outstanding, but under the formula it is difficult for these staff to be placed in the excellent category as the managers feel they should be. An excellent rating requires exceptional ratings in all the critical elements and a fully successful rating in at least one non-critical element. However, according to staff we interviewed, virtually all staff who are exceptional in the critical elements are also exceptional in all non-critical element(s), so they appropriately end up in the outstanding category. On the other hand, the overall rating for staff who receive a fully successful rating on just one of the critical elements—even if they are rated exceptional in all the other elements—drops down to fully successful. Managers in one regional office commented that the system would produce more accurate overall performance ratings if staff were given an overall rating of excellent when they had, for example, exceptional ratings on three of five overall elements and fully successful ratings on the other two. An official in VA’s Office of Human Resources Management acknowledged that there may be an issue with the agency’s formula. Although neither VBA nor VA central office officials have examined the distribution of VSRs and RVSRs across the five overall performance ratings, VA indicated it is considering changes to the system designed to allow for greater differentiation in performance ratings. For example, one possible change would be to use a five-point scale for rating individual elements—probably mirroring the five overall performance rating categories of outstanding, excellent, fully successful, minimally satisfactory, and unsatisfactory— rather than the current three-point scale. 
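The conversion logic at issue can be illustrated with a short sketch. This is a minimal illustration of the rules described above, not VA’s actual formula: the element names are examples, and the treatment of combinations the report does not spell out (such as ratings below fully successful) is assumed.

EXCEPTIONAL, FULLY_SUCCESSFUL, LESS_THAN_FULLY_SUCCESSFUL = 3, 2, 1

def overall_rating(critical, non_critical):
    # critical / non_critical: lists of element ratings for one employee.
    all_elements = critical + non_critical
    if any(r == LESS_THAN_FULLY_SUCCESSFUL for r in critical):
        # Assumption: the report does not describe these combinations.
        return "minimally satisfactory or unsatisfactory (assumed)"
    if all(r == EXCEPTIONAL for r in all_elements):
        return "outstanding"
    if all(r == EXCEPTIONAL for r in critical) and any(r == FULLY_SUCCESSFUL for r in non_critical):
        return "excellent"
    # Per the report, a fully successful rating on even one critical element
    # drops the overall rating to fully successful.
    return "fully successful"

# Example: a VSR rated exceptional on quality, productivity, and customer
# service but only fully successful on workload management, with an
# exceptional rating on the non-critical element.
print(overall_rating([3, 3, 2, 3], [3]))  # -> fully successful

Under rules like these, a single fully successful rating on a critical element is enough to move a staff member from outstanding to fully successful, which is consistent with the clustering of staff in those two categories.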
Under the proposed change, a staff member who was generally performing at the excellent but not outstanding level could get excellent ratings in all the elements and receive an overall rating of excellent. This change must still be negotiated with several stakeholder groups, according to the VA official we interviewed. In many ways, VBA has developed a training program for its new staff that is consistent with accepted training practices in the federal government. However, because VBA does not centrally evaluate or collect feedback on training provided by its regional offices, it lacks the information needed to determine if training provided at regional offices is useful and what improvements, if any, may be needed. Ultimately, this information would help VBA determine if 80 hours of training annually is the right amount, particularly for its experienced staff, and whether experienced staff members are receiving training that is relevant for their positions. Identifying the right amount of training is crucial for the agency as it tries to address its claims backlog. An overly burdensome training requirement may needlessly take staff away from claims processing, while too little training could contribute to processing inaccuracies. Also, without collecting feedback on regional office training, VBA may not be aware of issues with the implementation of its TPSS, the on-line training tool designed to ensure consistency across offices in technical training. Setting aside the issue of how many hours of training should be required, VBA does not hold its staff accountable for fulfilling their training requirement. As a result, VBA is missing an opportunity to clearly convey to staff the importance of managing their time to meet training requirements as well as production and accuracy goals. With the implementation of its new learning management system, VBA should soon have the ability to track training completed by individual staff members, making it possible to hold them accountable for meeting the training requirement. As with its training program for VSRs and RVSRs, VA is not examining the performance management system for claims processors as closely as it should. VBA is generally using the right elements to evaluate its claims processors’ performance, and the performance appraisals have the potential to give managers information they can use to recognize and reward higher levels of performance. However, evidence suggests the formula used to place VSRs and RVSRs into overall performance categories may not clearly and accurately differentiate among staff’s performance levels. Absent additional examination of the distribution of claims processors among overall performance categories, VA lacks a clear picture of whether its system is working as intended and whether any adjustments are needed. The Secretary of Veterans Affairs should direct VBA to: Collect and review feedback from staff on the training conducted at the regional offices to help determine: if the 80-hour annual training requirement is appropriate for all VSRs and RVSRs; the extent to which regional offices provide training that is relevant to VSRs’ and RVSRs’ work, given varying levels of staff experience; and whether regional offices find the TPSS a useful learning tool and, if not, what adjustments are needed to make it more useful; and Use information from its new learning management system to hold individual VSRs and RVSRs accountable for completing whatever annual training requirement it determines is appropriate. 
The Secretary of Veterans Affairs should also examine the distribution of claims processing staff across overall performance categories to determine if its performance appraisal system clearly differentiates between overall performance levels, and if necessary adjust its system to ensure that it makes clear distinctions. We provided a draft of this report to the Secretary of Veterans Affairs for review and comment. In VA’s written comments (see app. IV), the agency agreed with our conclusions and concurred with our recommendations. For example, VBA plans to consult with regional office staff to evaluate its annual 80-hour training requirement and will examine if staff performance ratings clearly differentiate between overall performance levels. VA also provided technical comments that were incorporated as appropriate. We are sending copies of this report to the Secretary of Veterans Affairs, relevant congressional committees, and others who are interested. We will also provide copies to others on request. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Contact points for the Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix V. We were asked to determine: (1) What training is provided to new and experienced claims processors and how uniform is this training? (2) To what extent has the Veterans Benefits Administration (VBA) developed a strategic approach to planning training for claims processors and how well is their training designed, implemented, and evaluated? And (3) To what extent is the performance management system for claims processors consistent with generally accepted performance management practices in the public sector? To answer these questions, we reviewed documents and data from the central office of the Department of Veterans Affairs’ Veterans Benefits Administration (VBA) and interviewed VBA central office officials. We conducted site visits to and collected data from four VBA regional offices, and visited the Veterans Benefits Academy. We also interviewed officials from the American Federation of Government Employees, the labor union that represents Veterans Service Representatives (VSR) and Rating Veterans Service Representatives (RVSR). We compared VBA’s training and performance management systems to accepted human capital principles and criteria compiled by GAO. We conducted this performance audit from September 2007 through May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted site visits to 4 of VBA’s 57 regional offices—Atlanta; Baltimore; Milwaukee; and Portland, Oregon. We judgmentally selected these offices to achieve some diversity in geographic location, number of staff, and claims processing accuracy rates, and what we report about these sites may not necessarily be representative of any other regional offices or all regional offices (see fig. 7). 
During our site visits, we interviewed regional office managers, supervisors of VSRs and RVSRs, VSRs, and RVSRs about the training and performance management practices in their offices. The VSRs and RVSRs we interviewed at the four regional offices had varying levels of experience at VBA. Regional office managers selected the staff we interviewed. We also observed a demonstration of VBA’s on-line learning tool, the Training and Performance Support System (TPSS), and collected data from the regional offices on, for example, the training they provided during fiscal year 2007. In conjunction with our visit to the Baltimore regional office, we also visited VBA’s Veterans Benefits Academy, where we observed classes for VSRs and RVSRs and interviewed the director of the Academy. To determine whether VBA’s training program is consistent with accepted training practices in the public sector, we relied partly on a guide developed by GAO that lays out principles that federal agencies should follow to ensure their training is effective. This guide was developed in collaboration with government officials and experts in the private sector, academia, and nonprofit organizations; and in conjunction with a review of laws, regulations and literature on training and development issues, including previous GAO reports. The guide lays out the four broad components of the training and development process (see fig. 8). The guide also provides key questions for federal agencies to consider in assessing their performance in each component. (See table 4 for a sample of these questions.) In addition, GAO training experts reviewed VBA materials, including training curricula, lesson plans, and course evaluation forms, to determine if these materials are consistent with accepted training practices. In assessing the performance management system for VSRs and RVSRs, we relied primarily on a set of accepted practices of effective public sector performance management systems that has been compiled by GAO. To identify these accepted practices, GAO reviewed its prior reports on performance management that drew on the experiences of public sector organizations both in the United States and abroad. For the purpose of this review, we focused on the six accepted practices most relevant for VBA’s claims-processing workforce (see table 5). Additional Issue Specific Lesson Plans are under development. (Lesson plans can be taken from the Centralized Training Curriculum found on the C&P Intranet Training Site. If used as provided they do not require C&P review and approval. These plans can and often should be modified to focus in on a particular narrow issue of training need. Modified lesson plans are to be submitted to C&P Service for review and approval at least 30 days prior to delivery of training. Any Challenge-oriented original lesson plan developed by Station personnel is to be submitted to C&P Service for review and approval at least 30 days prior to delivery of training.) C&P Service Broadcasts that may be provided during the course of the FY may be substituted in place of any training scheduled on an hour by hour basis. 60 Hours of the required 80 Hours will be selected from the suggested topics above. The remaining 20 hours will be selected at the Stations discretion based upon their own individual quality review. (Training provided from the above topics can be focused on a particular aspect of the topic; i.e. 
Cold Injuries and Rating Hypertension from Cardiovascular issues could be separate classes) Participation in Agency Advancement Programs (i.e., LEAD, LVA) does not substitute for Required training requirements. Additional Issue Specific Lesson Plans are under development. (Lesson plans can be taken from the Centralized Training Curriculum found on the C&P Intranet Training Site. If used as provided they do not require C&P review and approval. These plans can and often should be modified to focus in on a particular narrow issue of training need. Modified lesson plans are to be submitted to C&P Service for review and approval at least 30 days prior to delivery of training. Any Challenge-oriented original lesson plan developed by Station personnel is to be submitted to C&P Service for review and approval at least 30 days prior to delivery of training.) C&P Service Broadcasts that may be provided during the course of the FY may be substituted in place of any training scheduled on an hour by hour basis. Drill Pay Waivers Pension Awards Processing & BDN Hospital Reductions Burial Benefits Death Pension Accrued Benefits Accrued Awards & the BDN Apportionments Special Monthly Pension Helpless Child Incompetency/Fiduciary Arrangements Claims Processing Auto Allowance and Adaptive Equipment Special Adapted Housing Special Home Adaptation Grants Incarcerated Veterans Processing Write Outs FOIA/Privacy Act Telephone & Interview Techniques Telephone Development IRIS Introduction to VACOLS Education Benefits Insurance Benefits National Cemetery VR&E Benefits Loan Guaranty Benefits General Benefits – FAQs Suicidal Caller Guidance Non-Receipt of BDN Payments Mail Handling Income & Net Worth Determinations Bootcamp test and review of VSR Readiness Guide (2 HRS Required) Reference Material Training and Navigation (1 HR Required) Appeals and Ancillary Benefits Ready to Rate Development Customer Service FNOD Info and PMC Process Intro to Appeals Process DRO Selection Letter Income Adjustment Materials Income Adjustments 60 Hours of the required 80 Hours will be selected from the suggested topics above. The remaining 20 hours will be selected at the Stations discretion based upon their own individual quality review. Overview of VA Mission Reference Materials: Manual Training & WARMS C&P Website Claims Folder Maintenance Records Management POA/Service Organizations Compensation Original Compensation Claims Non-Original Compensation Claims VA Form 21-526, App. For Compensation or Pension Establishing Veteran Status Claims Recognition Duty to Assist Selecting the Correct Worksheet for VA Exams Issue Specific Claim Development Asbestos Claim Development Herbicide Claim Development POW Claim Development Radiation Claim Development PTSD Claim Development Undiagnosed Illness Claim Development Dependency Contested Claims Deemed Valid and Common-law Marriage Continuous Cohabitation Pension Intro. To Disability Pension Overview of SHARE (SSA) Administrative Decision Process Character of Discharge Line of Duty – Willful Misconduct Claims Development Workload Management Utilizing WIPP DEA Training (req. 
added 4/06) Intro to Ratings Paragraph 29 & 30 Ratings Ratings & the BDN BDN 301 Interface Video PCGL Award Letters PCGL Dependents & the BDN Compensation Offsets Drill Pay Waivers Star Reporter Pension Awards Processing & the BDN Hospital Reductions Burial Benefits Disallowance Processing DIC Benefits Death Pension Accrued Benefits Accrued Awards & the BDN Apportionment Special Monthly Pension Helpless Child Incompetency/Fiduciary Arrangements Claims Processing Automobile Allowance and Adaptive Equipment Specially Adapted Housing and Special Home Adaptation Grants Incarceration Processing Computer Write Outs DEA Training (req. added 4/06) Public Contact Team Training: FOIA/Privacy Act Communication Skills Telephone Development Inquiry Routing and Information System (IRIS) Intro to VACOLS Other VBA Business Lines Customer Service Insurance Education (2 hrs) Intro to Appeals Process VACOLS http://cptraining.vba.va.gov/ C&P_Training/vsr/VSR_ Curriculum.htm#att http://cptraining.vba.va.gov/ C&P_Training/vsr/VSR_ Curriculum.htm#iam. Each training plan we reviewed contained the same informational categories, some of which were what courses were offered by the regional office, whether or not the course was conducted, and how many employees completed the training. Although the fiscal year 2007 training plans we reviewed include data on whether and when the course was actually completed, the initial training plans submitted at the beginning of the fiscal year of course do not have this information. The lists provided below include the first 25 courses listed on each plan alphabetically, a small sample of the courses that the regional offices reported they completed for the fiscal year. Daniel Bertoni (202) 512-7215 bertonid@gao.gov. In addition to the contact named above, Clarita Mrena, Assistant Director; Lorin Obler, Analyst-in-Charge; Carolyn S. Blocker; and David Forgosh made major contributions to this report; Margaret Braley, Peter Del Toro, Chris Dionis, Janice Latimer, and Carol Willett provided guidance; Walter Vance assisted with study design; Charles Willson helped draft the report; and Roger Thomas provided legal advice. Veterans’ Benefits: Improved Management Would Enhance VA’s Pension Program. GAO-08-112. Washington, D.C.: February 14, 2008. Veterans’ Disability Benefits: Claims Processing Challenges Persist, while VA Continues to Take Steps to Address Them. GAO-08-473T. Washington, D.C.: February 14, 2008. Disabled Veterans’ Employment: Additional Planning, Monitoring, and Data Collection Efforts Would Improve Assistance. GAO-07-1020. Washington, D.C.: September 12, 2007. Veterans’ Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003. Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-03-893G. Washington, D.C.: July 2003. Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington D.C.: March 14, 2003. Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 1, 2003. Veterans’ Benefits: Claims Processing Timeliness Performance Measures Could Be Improved. GAO-03-282. Washington, D.C.: December 19, 2002. Veterans’ Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002. 
Veterans’ Benefits: Training for Claims Processors Needs Evaluation. GAO-01-601. Washington, D.C.: May 31, 2001. Veterans Benefits Claims: Further Improvements Needed in Claims- Processing Accuracy. GAO/HEHS-99-35. Washington, D.C.: March 1, 1999.
Faced with an increase in disability claims, the Veterans Benefits Administration (VBA) is hiring a large number of new claims processing staff. We were asked to determine: (1) What training is provided to new and experienced claims processors and how uniform is this training? (2) To what extent has VBA planned this training strategically, and how well is the training designed, implemented, and evaluated? and (3) To what extent is the performance management system for claims processors consistent with generally accepted practices? To answer the questions, GAO reviewed documents including VBA policies and training curricula; interviewed VBA central office officials; visited 4 of VBA's 57 regional offices, which were selected to achieve diversity in geographic location, number of staff, and officewide accuracy in claims processing; and compared VBA's training and performance management to generally accepted practices identified by GAO. VBA has a standardized training curriculum for new staff and a training requirement for all staff, but does not hold staff accountable for meeting this requirement. The curriculum for new staff includes what is referred to as centralized training and training at their home offices. All claims processors must complete 80 hours of training annually, which may cover a mix of topics identified centrally and by regional offices. Individual staff members face no consequences for failing to meet the training requirement, however, and VBA has not tracked training completion by individuals. It is implementing a new system that should provide this capacity. Although VBA has taken steps to plan its training strategically, the agency does not adequately evaluate training and may be falling short in training design and implementation. VBA has a training board that assesses its overall training needs. However, the agency does not consistently collect feedback on regional office training, and both new and experienced staff GAO interviewed raised issues with their training. Some new staff raised concerns about the consistency of training provided by different instructors and about the usefulness of an on-line learning tool. Some experienced staff believe that 80 hours of training annually is not necessary, some training was not relevant for them, and workload pressures impede training. The performance management system for claims processors generally conforms to GAO-identified key practices, but the formula for assigning overall ratings may prevent managers from fully acknowledging and rewarding staff for higher levels of performance. The system aligns individual and organizational performance measures and requires that staff be given feedback throughout the year. However, VBA officials raised concerns about the formula used to assign overall ratings. Almost all staff in the offices GAO visited were placed in only two of five overall rating categories, although managers said greater differentiation would more accurately reflect actual performance differences. The Department of Veterans Affairs (VA) has not examined the ratings distribution, but acknowledges a potential issue with its formula and is considering changes.
Under TRICARE, beneficiaries may obtain health care through either the direct care system of military treatment facilities or the purchased care system of civilian providers and hospitals, including SCHs. SCHs were exempted from TRICARE’s reimbursement rules for hospitals until revised rules were implemented in January 2014. SCHs serve communities that rely on them for inpatient care, and they include hospitals and regional medical centers ranging in size from 9 to 598 beds. The intent of the SCH designation is to maintain access to needed health services for Medicare beneficiaries by providing financial assistance to hospitals that are geographically isolated. A hospital may generally qualify for SCH status by showing that because of factors such as isolated location, weather conditions, travel conditions, or absence of other like hospitals, it is the sole source of inpatient hospital services reasonably available in a geographic area. In 2014, 459 hospitals were designated as SCHs under the Medicare program. A hospital that qualifies as an SCH under the Centers for Medicare & Medicaid Services’ (CMS) Medicare regulations is also considered an SCH under TRICARE. Specifically, a hospital paid under the Medicare Acute Care Hospital IPPS is eligible for classification as an SCH if it meets one of the following criteria established by CMS: (1) the hospital is at least 35 miles from other like hospitals; (2) the hospital is rural, is between 25 and 35 miles from other like hospitals, and meets one of the following conditions: (a) no more than 25 percent of hospital inpatients, or no more than 25 percent of the Medicare inpatients in the hospital’s service area, are admitted to other like hospitals within a 35-mile radius of the hospital or, if larger, within its service area; (b) the hospital has fewer than 50 beds and would meet the 25 percent criterion except that some beneficiaries or residents were forced to seek specialized care outside of the service area due to the unavailability of necessary specialty services at the hospital; or (c) because of local topography or periods of prolonged severe weather conditions, the other like hospitals are inaccessible for at least 30 days in each of 2 out of 3 years; (3) the hospital is rural and located between 15 and 25 miles from other like hospitals, but because of local topography or periods of prolonged severe weather conditions, the other like hospitals are inaccessible for at least 30 days in each of 2 out of 3 years; or (4) the hospital is rural and, because of distance, posted speed limits, and predictable weather conditions, the travel time between the hospital and the nearest like hospital is at least 45 minutes. Under the TRICARE program, beneficiaries can obtain care either from providers at military treatment facilities or from civilian providers. DHA contracts with three regional managed care support contractors to develop networks of civilian providers in their respective regions, including SCHs, to serve TRICARE beneficiaries in geographic areas called Prime Service Areas. Prime Service Areas are geographically defined by a set of 5-digit zip codes, usually within an approximate 40-mile radius of a military treatment facility. These civilian provider networks are required to meet specific access standards for certain types of TRICARE beneficiaries, such as travel times or wait times for appointments. However, these access standards do not apply to inpatient care. 
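For illustration, the CMS qualification criteria listed above can be read as a single eligibility test. The sketch below is a simplification under stated assumptions: the inputs are boolean or numeric stand-ins for determinations (such as rural status or weather-related inaccessibility) that CMS makes through its own documentation requirements, and the parameter names are illustrative.

def qualifies_as_sch(miles_to_nearest_like_hospital, is_rural, beds,
                     share_of_inpatients_admitted_elsewhere,
                     meets_25_percent_test_but_for_specialty_care,
                     like_hospitals_inaccessible_30_days_2_of_3_years,
                     travel_minutes_to_nearest_like_hospital):
    # (1) At least 35 miles from other like hospitals.
    if miles_to_nearest_like_hospital >= 35:
        return True
    # (2) Rural, 25-35 miles away, and one of three additional conditions.
    # Simplified: the regulation applies the 25 percent test to either all
    # inpatients or Medicare inpatients in the service area.
    if is_rural and 25 <= miles_to_nearest_like_hospital <= 35:
        if (share_of_inpatients_admitted_elsewhere <= 0.25
                or (beds < 50 and meets_25_percent_test_but_for_specialty_care)
                or like_hospitals_inaccessible_30_days_2_of_3_years):
            return True
    # (3) Rural, 15-25 miles away, with topography- or weather-related
    # inaccessibility of other like hospitals.
    if is_rural and 15 <= miles_to_nearest_like_hospital <= 25 \
            and like_hospitals_inaccessible_30_days_2_of_3_years:
        return True
    # (4) Rural, with at least 45 minutes of travel time to the nearest like hospital.
    if is_rural and travel_minutes_to_nearest_like_hospital >= 45:
        return True
    return False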
Since 1987, DHA has reimbursed hospitals for claims using the agency’s DRG-based payment system, which was modeled after Medicare’s system. Under this system, claims are priced using an annual standard amount and a weighted value for each DRG. For example, in fiscal year 2014, the TRICARE annual standard amount was approximately $5,500.00. Payment weights are assigned to each DRG based on the average resources used to treat patients. For example, in fiscal year 2014, a lung transplant had a weight of 8.6099, which would be multiplied by the annual standard payment amount ($5,500.00) for a reimbursement of $47,354.45. TRICARE’s DRG-based payment system differs from Medicare’s DRG-based payment system in that each program has different annual standard amounts and different DRG weights due to differences in the characteristics of their beneficiary populations. For example, Medicare’s population, which is generally older and less healthy than TRICARE’s population, may require more resources and may require longer inpatient lengths of stay. Also, some services, notably obstetric and pediatric services, are nearly absent from Medicare, but are a much larger component of TRICARE’s services. SCHs were exempted from DHA’s DRG-based payment system because they had special payment provisions under Medicare that allowed for payments based on historical costs as well as certain types of adjustments, such as additional payments for significant volume decreases defined as a more than 5 percent decrease in total inpatient discharges as compared to the immediately preceding cost reporting period. Instead, DHA generally reimbursed SCHs based on their billed charges for inpatient care provided to TRICARE beneficiaries. However, distinctions were made among providers based on network status. Specifically, nonnetwork SCHs were reimbursed for their billed charges, and network hospitals were reimbursed based on their billed charges less any discounts that they negotiated with the managed care support contractors. Under its revised reimbursement rules for SCHs, DHA’s methodology for TRICARE approximates the rules for Medicare for these hospitals. Specifically, both programs reimburse SCHs using the greater of either a cost-based amount or the allowed amount under a DRG-based payment system. However, each program takes a different approach in implementing these methods. Medicare reimburses each SCH based on which of the following methods yields the greatest aggregate payment for that hospital: (1) the updated hospital-specific rate based on cost per discharge from fiscal year 1982, (2) the updated hospital-specific rate based on cost per discharge from fiscal year 1987, (3) the updated hospital-specific rate based on cost per discharge from fiscal year 1996, (4) the updated hospital-specific rate based on cost per discharge from fiscal year 2006, or (5) the IPPS hospital-specific DRG rate payment. Medicare’s reimbursement rules also include payment adjustments that SCHs may receive under special programs or circumstances, such as adjustments to SCHs that experience significant volume decreases. Beginning January 1, 2014, TRICARE began reimbursing SCHs based upon the greater of (1) the SCH’s Medicare cost-to-charge ratio, or (2) TRICARE’s DRG-based payment system. The Medicare cost-to-charge ratio that TRICARE uses is calculated for each hospital by CMS and is distinct from the historical hospital-specific rates based on the cost per discharge that Medicare uses to reimburse SCHs. 
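The DRG pricing arithmetic described above, and the year-end "greater of" comparison detailed in the next paragraph, can be sketched as follows; the function names and the aggregate comparison are illustrative assumptions rather than DHA's implementation, and the figures reproduce the fiscal year 2014 example from the text.

# DRG-based pricing: the DRG weight times the annual standard amount.
def drg_payment(drg_weight, annual_standard_amount):
    return round(drg_weight * annual_standard_amount, 2)

# FY 2014 example from the text: lung transplant weight of 8.6099 against
# TRICARE's roughly $5,500 standard amount.
print(drg_payment(8.6099, 5500.00))  # 47354.45

# Revised SCH rule, in aggregate: pay the greater of the cost-based amount
# (Medicare cost-to-charge ratio times billed charges) or what TRICARE's
# DRG-based system would have allowed for the year.
def sch_annual_reimbursement(billed_charges, cost_to_charge_ratio, drg_allowed_amounts):
    cost_based = sum(billed_charges) * cost_to_charge_ratio
    drg_based = sum(drg_allowed_amounts)
    return max(cost_based, drg_based)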
Under TRICARE’s revised rules for SCHs, the cost-to-charge ratio will be multiplied by each hospital’s billed charges to determine its reimbursement amount. Also, at the end of each year, DHA calculates the aggregate amount that each SCH would have been reimbursed under TRICARE’s DRG-based payment system, which it uses to reimburse other hospitals that provide inpatient care to TRICARE beneficiaries. If an SCH’s aggregate reimbursement would have been more under this system than it would have using the Medicare cost-to-charge ratio, DHA pays the SCH the difference. TRICARE’s revised reimbursement rules also include payment adjustments that SCHs may receive under special circumstances, although the specific TRICARE adjustments differ from those available under Medicare. For example, effective with the revised reimbursement rules, SCHs may qualify for a General Temporary Military Contingency Payment Adjustment if they meet certain criteria, including serving a disproportionate share of active duty servicemembers and their dependents—10 percent or more of the SCH’s total admissions. At the time of our review, DHA officials did not have an estimate of the number of SCHs that would qualify for this adjustment. Under TRICARE’s revised rules, some SCHs—which were previously reimbursed at up to 100 percent of their billed charges—will eventually be reimbursed at 30 to 50 percent of their billed charges. In order to minimize sudden significant reimbursement reductions on SCHs, DHA’s revised rules include a transition period to the new reimbursement levels for most SCHs. Eligible SCHs are reimbursed using an individually derived base-year ratio that is reduced annually until it matches the SCH’s Medicare cost-to-charge ratio that CMS has calculated for each hospital. For each hospital designated as an SCH during fiscal year 2012, DHA calculated a base-year ratio of their allowed-to-billed charges using fiscal year 2012 TRICARE claims data. Based on these calculations, each SCH fell into one of two categories: (1) SCHs with base-year ratios higher than their Medicare cost-to-charge ratios, or (2) SCHs with base-year ratios lower than, or equal to, their Medicare cost-to-charge ratios. Most SCHs fell into the first category with base-year ratios higher than their Medicare cost-to-charge ratios (339 or 74 percent), which qualified them for a transition period. For these SCHs, their base-year ratios are reduced annually based on their network participation status, and their modified ratios are multiplied by their billed charges beginning January 1, 2014. Specifically, a nonnetwork SCH has no more than a 15 percentage point reduction each year, while a network SCH has no more than a 10 percentage point reduction as its reimbursement level declines to its respective Medicare cost-to-charge ratio. The length of the transition period differs for each SCH and is determined by the difference between its base-year ratio and its Medicare cost-to-charge ratio, and its network status. Figure 1 shows an example of the transition for a network SCH with a 95 percent base-year ratio that is transitioning to a Medicare cost-to-charge ratio of 40 percent. As a network provider, the SCH’s base-year ratio would be reduced by 10 percentage points to 85 percent during the first year of implementation of the revised rules and would continue to be reduced until its reimbursement ratio matches the SCH’s Medicare ratio 5 years later. 
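A sketch of the transition arithmetic described above and illustrated in figure 1, assuming the reduction is simply capped at 10 percentage points per year for network SCHs and 15 for nonnetwork SCHs:

def transition_schedule(base_year_ratio, medicare_ratio, in_network=True):
    # Returns the SCH's reimbursement ratio (in percent) for each year of the
    # transition, ending when it reaches the Medicare cost-to-charge ratio.
    max_step = 10 if in_network else 15
    ratios, current = [], base_year_ratio
    while current > medicare_ratio:
        current = max(current - max_step, medicare_ratio)
        ratios.append(current)
    return ratios

# Figure 1 example: a network SCH moving from a 95 percent base-year ratio
# to a 40 percent Medicare cost-to-charge ratio.
print(transition_schedule(95, 40))  # [85, 75, 65, 55, 45, 40]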
Twenty-four percent (111 of 459) of the hospitals that were designated as SCHs during fiscal year 2012 with base-year ratios less than or equal to their Medicare cost-to-charge ratios did not qualify for a transition period because either their reimbursement increased to their Medicare cost-to-charge ratio, or they continued to be reimbursed at their Medicare cost-to-charge ratio. Similarly, about 2 percent (9 of 459) of the hospitals that were not designated as SCH in fiscal year 2012 also did not qualify for a transition period. Instead, these SCHs are now reimbursed using their Medicare cost-to-charge ratio in accordance with TRICARE’s revised reimbursement rules. Once an SCH reaches its Medicare cost-to-charge ratio, TRICARE reimburses labor, delivery, and nursery care services at 130 percent of this ratio. This rule is based on DHA’s assessment that Medicare’s ratio does not accurately reflect the costs for these services. According to TRICARE’s fiscal year 2013 claims data, 120 SCHs (approximately 30 percent of all SCHs) were already reimbursed using rates that were at or below their Medicare cost-to-charge ratios. Because most SCHs have just completed the first year of a multi-year transition, it is too early to determine the full effect of the revised reimbursement rules on SCHs, including any effect on TRICARE beneficiaries’ ability to obtain care at these hospitals. Nonetheless, early indications show that TRICARE beneficiaries have not experienced problems accessing inpatient care at these facilities. For fiscal year 2013, we found that overall TRICARE reimbursements for SCHs averaged less than 1 percent of their net patient revenue, with TRICARE beneficiaries making up just over 1 percent of their total discharges. We also found that the majority of SCHs—58 percent (265 of 459)—had fewer than 20 TRICARE admissions during this time while 10 percent (44 of 459) had 100 or more TRICARE admissions. As a result, the impact of TRICARE’s revised reimbursement rules may likely be small for most SCHs. Figure 2 illustrates a breakdown of the 459 SCHs by their fiscal year 2013 TRICARE admissions. DHA officials reported that they do not think access to inpatient care at SCHs will be an issue because hospitals that participate in the Medicare program are required to participate in the TRICARE program and serve its beneficiaries. Officials from the 10 SCHs we identified as having the highest number of TRICARE admissions, the highest reimbursement amounts, or both, told us that they provide care to all patients, including TRICARE beneficiaries—although some of them were not familiar with this requirement. TRICARE reimbursement for these SCHs ranged from about 2 to 12 percent of their net patient revenue, and TRICARE beneficiaries accounted for about 1 to 27 percent of their total discharges. See table 1 for TRICARE percentages of net patient revenue and total discharges for each of these SCHs. However, TRICARE beneficiaries’ access to care at SCHs could be affected if these hospitals reduced or eliminated their inpatient services. The SCH officials we spoke with told us that they had not reduced the inpatient services available at their hospitals as a result of TRICARE’s revised reimbursement rules. However, officials at two SCHs did express concerns about future difficulties maintaining their current level of operations as they face further reductions in reimbursements not only from TRICARE, but also from other sources, such as Medicare and Medicaid. 
These officials said that they are concerned about their facilities’ long-term survival. Given the current environment of decreasing reimbursements, some SCHs we interviewed reported taking proactive steps, such as eliminating equipment maintenance contracts, to help offset the reimbursement reductions. Officials from one facility we interviewed told us that they are considering an option to partner with another SCH as a way to increase efficiency. TRICARE beneficiaries’ demand for inpatient care at SCHs also may be affected by the availability of inpatient care from their respective military installation. We found that 24 of the 44 SCHs we identified as having 100 or more TRICARE admissions in fiscal year 2013—about half—were within 40 miles of a military installation that only had an outpatient clinic. (See appendix II for a list of the 44 SCHs and their proximity to military hospitals and clinics). As a result, servicemembers and their dependents in those locations may be more reliant on a nearby SCH for their inpatient care. We found that TRICARE inpatient admissions for these 24 facilities ranged from 101 to 2,178 in fiscal year 2013, and 6 of them were among the 10 SCHs that we interviewed because they had the highest number of TRICARE admissions, the highest reimbursement amounts, or both. Officials from these 6 SCHs told us that nearby TRICARE beneficiaries tend to rely on their facilities for certain types of inpatient services, such as labor and delivery. See figure 3 for additional information about SCHs with 100 or more TRICARE admissions and their proximity to military hospitals and clinics. We also found that 12 of the 44 SCHs with 100 or more admissions were located fewer than 40 miles from a military hospital. TRICARE admissions for these facilities ranged from 117 to 2,364 in fiscal year 2013. Three of these SCHs—which are located near Naval hospitals in North Carolina and South Carolina—were among the 10 SCHs with the highest number of TRICARE admissions or the highest numbers of both admissions and reimbursement that we interviewed. An official with Naval Hospital Camp Lejeune (North Carolina) told us the hospital relies on local SCHs because it is either unable to meet their beneficiaries’ demand for certain services, such as obstetric care, or because the SCHs offer services not available at the Naval hospital, such as some cardiac care. Naval Hospital Beaufort (South Carolina) provides limited inpatient services, and according to an official there, most of that hospital’s beneficiaries obtain inpatient care at the local SCH, including intensive care, all pediatric care, maternity and newborn care, and certain types of specialty care not provided at the Naval hospital (neurology, cardiology, and gastroenterology). We also interviewed officials at two additional military hospitals—Naval Hospital Lemoore (California) and Naval Hospital Oak Harbor (Washington)—that had eliminated all or most of their inpatient care and were within 40 miles of an SCH. These officials told us that they rely more on other hospitals that are closer to their installations than the SCHs. For example, an official with Naval Hospital Lemoore told us that Lemoore currently has a resource sharing agreement with another hospital, which is closer to them than the nearby SCH. This agreement allows military providers with privileges to deliver babies for TRICARE beneficiaries at that facility. 
Officials from Naval Hospital Oak Harbor told us that their hospital tends to utilize three smaller facilities closer to it than the SCH depending on the type of service needed. DHA and managed care support contractor officials told us that they have not heard of concerns or issues with beneficiary access at SCHs resulting from the revised reimbursement rules. DHA officials reported that they do not think access to inpatient care at SCHs will be an issue because hospitals that participate in the Medicare program are required to participate in the TRICARE program and serve its beneficiaries. DHA officials told us they track access issues pertaining to inpatient care at SCHs through concerns or complaints communicated to them through the TRICARE Regional Offices or directly from beneficiaries. As of February 2015, these officials told us they have not received any such complaints. They noted that they are looking at ways to measure changes in access to care at these facilities, possibly by comparing the number of discharges from one year to the next. Although their plans are under development, officials stated that they will likely focus on the 44 SCHs that had 100 or more TRICARE admissions. Officials from DHA’s TRICARE Regional Offices and the managed care support contractors also told us that they have not received complaints or heard of issues from beneficiaries about their ability to access inpatient care at SCHs. In addition, officials from national health care associations and military beneficiary coalition groups that we spoke with also reported that they have not heard any concerns about access to care at SCHs resulting from TRICARE’s revised reimbursement rules. We provided a draft of this report to DOD for comment. DOD responded that it agreed with the report’s findings, and its comments are reprinted in appendix III. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. We obtained TRICARE claims data on the number of admissions and reimbursement amounts for each sole community hospital (SCH) for fiscal year 2013. We used these data to select the eight SCHs with the highest number of TRICARE admissions and the eight SCHs with the highest reimbursement amounts. Due to overlap, the number of unique SCHs we selected totaled 10. We interviewed officials at those hospitals about the change in TRICARE reimbursement rules and any resulting effect on access to care. (Appendix II tables list the 8 SCHs that were 40 miles or more from a military outpatient clinic or hospital and the 12 SCHs that were less than 40 miles from a military hospital, or from both a military hospital and an outpatient clinic, along with each SCH’s fiscal year 2013 TRICARE admissions and nearby military facilities.) Debra A. Draper, Director, (202) 512-7114 or draperd@gao.gov. 
In addition to the contact name above, Bonnie Anderson, Assistant Director; Jennie Apter; Jackie Hamilton; Natalie Herzog; Giselle Hicks; Sylvia Diaz Jones; and Eric Wedum made key contributions to this report.
DOD offered health care to about 9.6 million eligible beneficiaries through TRICARE, which provides care through military treatment facilities and civilian providers. Because DOD determined that its approach for reimbursing SCHs (459 in 2014) based on their billed charges was inconsistent with TRICARE's governing statute to reimburse civilian providers in a manner similar to Medicare, it implemented revised rules in January 2014. House Report 113-446, which accompanied the National Defense Authorization Act for Fiscal Year 2015, included a provision for GAO to review issues related to the changes in TRICARE's reimbursement rules for SCHs. In this report, GAO examines (1) how TRICARE's revised reimbursement rules for SCHs compare to Medicare's reimbursement rules for these hospitals, and (2) the extent to which TRICARE's revised reimbursement rules for SCHs may have affected access to these facilities by servicemembers and their dependents. GAO reviewed federal laws and regulations as well as TRICARE and Medicare's rules for reimbursing SCHs. GAO analyzed fiscal year 2013 TRICARE claims data on SCH admissions and reimbursement amounts, and Medicare data on SCH net patient revenue and total discharges. GAO interviewed 10 SCHs with the highest number of TRICARE admissions or reimbursement amounts about access issues. GAO also interviewed officials from DOD and national health care associations and military beneficiary coalition groups. TRICARE's revised reimbursement rules for Sole Community Hospitals (SCHs), which under certain criteria serve as the sole source of inpatient hospital care reasonably available in rural or geographically isolated areas, approximate Medicare's reimbursement rules for these hospitals. Specifically, both programs reimburse SCHs using the greater of either a cost-based amount or the allowed amount under a diagnosis-related-group-based payment system, although each program takes a different approach in implementing these methods. Each program also provides for reimbursement adjustments under specific circumstances. In order to minimize sudden significant reductions in SCHs' TRICARE reimbursements, the revised rules include a transition period during which an eligible SCH is reimbursed using a cost-based ratio that is reduced annually until it matches the SCH's Medicare cost-to-charge ratio, which is calculated by the Centers for Medicare & Medicaid Services for each hospital. Under TRICARE's revised rules for SCHs, this cost-to-charge ratio will be multiplied by the hospitals' billed charges to determine their reimbursement amounts. Most SCHs—about 74 percent—qualified for a transition to their Medicare cost-to-charge ratios. Because most SCHs have just completed the first year of a multi-year transition, it is too early to determine the full effect of the revised reimbursement rules, including any impact on TRICARE beneficiaries' access to care at these hospitals. Nonetheless, early indications show that TRICARE beneficiaries have not experienced problems accessing inpatient care at these facilities. Specifically, Defense Health Agency (DHA) officials reported that they do not think access to inpatient care at SCHs will be an issue because hospitals that participate in the Medicare program are required to participate in the TRICARE program and serve its beneficiaries. 
Although some of them were not familiar with this requirement, officials from the 10 SCHs that GAO interviewed (those with the highest number of TRICARE admissions, the highest reimbursement amounts, or both) stated that they provide care to all patients, including TRICARE beneficiaries. DHA officials also said that they track access issues pertaining to inpatient care at SCHs through concerns or complaints, and as of February 2015, they had not received any access complaints. They noted that they are still looking at ways to measure changes in access to care at these facilities and will likely focus on the 44 SCHs that had 100 or more TRICARE admissions. In addition, other stakeholders, including representatives of national health care associations and military beneficiary coalition groups, said that they are not aware of TRICARE beneficiaries having difficulty accessing care at SCHs. Moreover, in its analysis of available Medicare data for these facilities (427 of 459 SCHs), GAO found that overall TRICARE reimbursements for SCHs averaged less than 1 percent of SCHs' net patient revenue, with TRICARE beneficiaries making up just over 1 percent of their total discharges for fiscal year 2013. As a result, the impact of TRICARE's revised reimbursement rules may likely be small for most SCHs. GAO provided a draft of this report to the Department of Defense (DOD) for comment. DOD responded that it agreed with the report's findings and provided technical comments, which GAO incorporated as appropriate.
Although VA has been authorized to collect third-party health insurance payments since 1986, it was not allowed to use these funds to supplement its medical care appropriations until enactment of the Balanced Budget Act of 1997. Part of VA’s 1997 strategic plan was to increase health insurance payments and other collections to help fund an increased health care workload. The potential for increased workload occurred in part because the Veterans’ Health Care Eligibility Reform Act of 1996 authorized VA to provide certain medical care services not previously available to higher-income veterans or those without service-connected disabilities. VA expected that the majority of the costs of their care would be covered by collections from third-party payments, copayments, and deductibles. These veterans increased from about 4 percent of all veterans treated in fiscal year 1996 to about a quarter of VA’s total patient workload in fiscal year 2002. VA can bill insurers for treatment of conditions that are not a result of injuries or illnesses incurred or aggravated during military service. However, VA cannot bill them for health care conditions that result from military service, nor is it generally authorized to collect from Medicare or Medicaid, or from health maintenance organizations when VA is not a participating provider. To collect from health insurers, VA uses five related processes to manage the information needed to bill and collect. The patient intake process involves gathering insurance information and verifying that information with the insurer. The medical documentation process involves properly documenting the health care provided to patients by physicians and other health care providers. The coding process involves assigning correct codes for the diagnoses and medical procedures based on the documentation. Next, the billing process creates and sends bills to insurers based on the insurance and coding information. Finally, the accounts receivable process includes processing payments from insurers and following up with insurers on outstanding or denied bills. In September 1999, VA adopted a fee schedule, called “reasonable charges.” Reasonable charges are itemized fees based on diagnoses and procedures. This schedule allows VA to more accurately bill for the care provided. However, by making these changes, VA created additional bill-processing demands—particularly in the areas of documenting care, coding that care, and processing bills per episode of care. First, VA must accurately assign medical diagnoses and procedure codes to set appropriate charges, a task that requires coders to search through medical documentation and various databases to identify all billable care. Second, VA must be prepared to provide an insurer supporting medical documentation for the itemized charges. Third, in contrast to a single bill for all the services provided during an episode of care under the previous fee schedule, under reasonable charges VA must prepare a separate bill for each provider involved in the care and an additional bill if a hospital facility charge applies. For fiscal year 2002, VA collected $687 million in insurance payments, up 32 percent compared to the $521 million collected during fiscal year 2001. Collections through the first half of fiscal year 2003 total $386 million in third-party payments. The increased collections in fiscal year 2002 reflected that VA processed a higher volume of bills than it did in the prior fiscal year. 
VA processed and received payments for over 50 percent more bills in fiscal year 2002 than in fiscal year 2001. VA’s collections grew at a lower percentage rate than the number of paid bills because the average payment per paid bill dropped 18 percent compared to the prior fiscal year. Average payments dropped primarily because a rising proportion of VA’s paid bills were for outpatient care rather than inpatient care. Since the charges for outpatient care were much lower on average, the payment amounts were typically lower as well. Although VA anticipated that the shift to reasonable charges in 1999 would yield higher collections, collections had dropped in fiscal year 2000. VA attributed that drop to its being unprepared to bill under reasonable charges, particularly because of its lack of proficiency in developing medical documentation and coding to appropriately support a bill. As a result, VA reported that many VA medical centers developed billing backlogs after initially suspending billing for some care. As shown in figure 1, VA’s third-party collections increased in fiscal year 2001—reversing fiscal year 2000’s drop in collections—and increased again in fiscal year 2002. After initially being unprepared in fiscal year 2000 to bill reasonable charges, VA began improving its implementation of the processes necessary to bill and increase its collections. By the end of fiscal year 2001, VA had submitted 37 percent more bills to insurers than in fiscal year 2000. VA submitted even more in fiscal year 2002, as over 8 million bills—a 54 percent increase over the number in fiscal year 2001—were submitted to insurers. Managers we spoke with in three networks—Network 2 (Albany), Network 9 (Nashville), and Network 22 (Long Beach)—mainly attributed the increased billings to reductions in the billing backlogs. Networks 2 (Albany) and 9 (Nashville) reduced backlogs, in part by hiring more staff, contracting for staff, or using overtime to process bills and accounts receivable. Network 2 (Albany), for instance, managed an increased billing volume through mandatory overtime. Managers we interviewed in all three networks noted better medical documentation provided by physicians to support billing. In Network 22 (Long Beach) and Network 9 (Nashville), revenue managers reported that coders were getting better at identifying all professional services that can be billed under reasonable charges. In addition, the revenue manager in Network 2 (Albany) said that billers’ productivity had risen from 700 to 2,500 bills per month over a 3-year period, as a result of gradually increasing the network’s productivity standards and streamlining their jobs to focus solely on billing. VA officials cited other reasons for the increased number of bills submitted to insurers. An increased number of patients with billable insurance was one reason for the increased billing. In addition, a May 2001 change in the reasonable-charges fee schedule for medical evaluations allowed separate bills for facility charges and professional service charges, a change that contributed to the higher volume of bills in fiscal year 2002. Studies have suggested that operational problems—missed billing opportunities, billing backlogs, and inadequate pursuit of accounts receivable—limited VA’s collections in the years following the implementation of reasonable charges. 
For example, a study completed last year estimated that 23.8 percent of VA patients in fiscal year 2001 had billable care, but VA actually billed for the care of only 18.3 percent of patients. This finding suggests that VA could have billed for 30 percent more patients than it actually billed. Further, after examining activities in fiscal years 2000 and 2001, a VA Inspector General report estimated that VA could have collected over $500 million more than it did. About 73 percent of this uncollected amount was attributed to a backlog of unbilled medical care; most of the rest was attributed to insufficient pursuit of delinquent bills. Another study, examining only professional-service charges in a single network, estimated that $4.1 million out of $4.7 million of potential collections was unbilled for fiscal year 2001. Of that unbilled amount, 63 percent was estimated to be unbillable primarily because of insufficient documentation. In addition, the study found that coders often missed services that should have been coded for billing. According to an official of the Veterans Health Administration’s (VHA) Chief Business Office (CBO), VA could increase collections by working on operational problems. These problems included unpaid accounts receivable and missed billing opportunities due to insufficient identification of insured patients, inadequate documentation to support billing, and coding problems that result in unidentified care. From April through June 2002, three network revenue managers told us about backlogs and processing issues that persisted into fiscal year 2002. For example, although Network 9 (Nashville) had above average increases in collections for both inpatient and outpatient care, it still had coding backlogs in four of six medical centers. According to Network 9’s (Nashville) revenue manager, eliminating the backlogs for outpatient care would increase collections by an estimated $4 million, or 9 percent, for fiscal year 2002. Additional increases might come from coding all inpatient professional services, but the revenue manager did not have an estimate because the extent to which coders are capturing all billable services was unknown. Moreover, although all three networks reported that physicians’ documentation for billing was improving, they also reported a continuing need to improve physicians’ documentation. In addition, Network 22 (Long Beach) reported that its accounts receivable staff had difficulties keeping up with the increased volume of bills because it had not hired additional staff members or contracted help on accounts receivable. As a result of these operational limitations, VA lacks a reliable estimate of uncollected dollars, and therefore does not have the basis to assess its systemwide operational effectiveness. For example, some uncollected dollars result from billing backlogs and billable care missed in coding. In addition, VA does not know the net impact of actual third-party collections on supplementing its annual appropriation for medical care. For example, CBO relies on reported cost data from central office and field staff directly involved in billing and collection functions. However, these costs do not include all costs incurred by VA in the generation of revenue. According to a CBO official, VA does not include in its collections cost the investments it has made in information technology or resources used in the identification of other health insurance during the enrollment process. 
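The arithmetic behind several of the figures cited above can be checked directly. The sketch below reproduces the fiscal year 2002 collections growth and the billable-versus-billed comparison; the implied growth in paid bills is a derived approximation, not a figure VA reported.

collections_fy2001, collections_fy2002 = 521e6, 687e6
growth = collections_fy2002 / collections_fy2001 - 1
print(f"Collections growth: {growth:.0%}")  # about 32 percent

# The average payment per paid bill fell about 18 percent, so the number of
# paid bills grew faster than collections did.
implied_bill_growth = (1 + growth) / (1 - 0.18) - 1
print(f"Implied growth in paid bills: {implied_bill_growth:.0%}")  # over 50 percent

# FY 2001 estimate: 23.8 percent of patients had billable care, but only
# 18.3 percent were billed.
print(f"Additional patients VA could have billed: {23.8 / 18.3 - 1:.0%}")  # about 30 percent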
VA continues to implement its 2001 Improvement Plan, which is designed to increase collections by improving and standardizing VA's collections processes. The plan's 24 actions are to address known operational problems affecting revenue performance. These problems include unidentified insurance for some patients, insufficient documentation for billing, coding staff shortages, gaps in the automated capture of billing data, and insufficient pursuit of accounts receivable. The plan also addresses uneven performance across collection sites. The plan seeks increased collections through standardization of policy and processes in the context of decentralized management, in which VA's 21 network directors and their respective medical center directors have responsibility for the collections process. Since management is decentralized, collections procedures can vary across sites. For example, sites' procedures can specify different waiting periods before insurers are first contacted about unpaid bills, and they can differ on whether contact is made by letter, telephone, or both. The plan intends to create greater process standardization, in part, by requiring certain collections processes, such as the use of electronic medical records by all networks to provide coders better access to documentation and legible records. When fully implemented, the plan's actions are intended to improve collections by reducing operational problems, such as missed billing opportunities. For example, two of the plan's actions—requiring patient contacts to gather insurance information prior to scheduled appointments and electronically linking VA to major insurers to identify patients' insurance—are intended to increase VA's awareness of its patients who have other health insurance. VA has implemented some of the improvement plan's 24 actions, which were scheduled for completion at various times through 2003, but is behind the plan's original schedule. The plan had scheduled 15 of the 24 actions for completion through May 25, 2002, but as of that date VA had only completed 8 of the actions. Information obtained from CBO in April 2003 indicates that 10 of the actions are complete and 7 are scheduled for implementation by the end of 2003. Implementation of the remaining actions will begin in 2004 as part of a financial system pilot with full implementation expected in 2005 or 2006. (Appendix I lists the actions and those VA reports as completed through April 28, 2003.) In May 2002, VHA established its CBO to underscore the importance of revenue, patient eligibility, and enrollment and to give strategic focus to improving these functions. Officials in that office told us that they have developed a new approach that can help increase third-party collections by further revising processes and providing a new business focus on collections. For example, the CBO's strategy incorporates improvements to the electronic transmission of bills and initiation of a system to receive and process third-party payments electronically. CBO's new approach also encompasses initiatives beyond the improvement plan, such as the one in the Under Secretary for Health's May 2002 memorandum that directed all facilities to refer accounts receivable older than 60 days to a collection agency, unless a facility can document a better in-house process.
According to the Deputy Chief Business Officer, the use of collection agencies has shown some signs of success—with outstanding accounts receivable dropping from $1,378 million to $1,317 million from the end of May to the end of July 2002, a reduction of about $61 million or 4 percent. CBO is in the process of acquiring a standardized Patient Financial Services System (PFSS) that could be shared across VA. VA's goal with PFSS is to implement a commercial off-the-shelf health care billing and accounts receivable software system. Under PFSS, a unique record will be established for each veteran. Patient information will be standardized, including veteran insurance data, which will be collected, managed, and verified. Receipts of health care products and services will be added to the patient records as they are provided or dispensed. PFSS will then automatically extract needed data for billing, with the majority of billings sent to payers without manual intervention. After the system is acquired, VA will conduct a demonstration project in Network 10 (Cincinnati). According to the Deputy Chief Business Officer, VA anticipates awarding a contract for the development and implementation of PFSS in May 2003. CBO's plan is to install this automated financial system in other facilities and networks if it is successfully implemented in the pilot site. CBO is taking action on a number of other initiatives to improve collections, including the following:
Planning and developing software upgrades to facilitate the health care service review process and electronically receive and respond to requests from insurers for additional documentation.
Establishing the Health Revenue Center to centralize preregistration, insurance identification and verification, and accounts receivable activities. For example, during a preregistration pilot in Network 11 (Ann Arbor), the Health Revenue Center made over 246,000 preregistration telephone calls to patients to verify their insurance information. According to VA, over 23,000 insurance policies were identified, resulting in $4.8 million in collections.
Assessing its performance based on private sector performance metrics, including measuring the pace of collections relative to the amount of accounts receivable.
As VA faces increased demand for medical care, particularly from higher-income veterans, third-party collections for nonservice-connected conditions remain an important source of revenue to supplement VA's appropriations. VA has been improving its billing and collecting under a reasonable-charges fee schedule it established in 1999, but VA has not completed its efforts to address problems in collections operations. In this regard, fully implementing the 2001 Improvement Plan could help VA maximize future collections by addressing problems such as missed billing opportunities. CBO's initiatives could further enhance collections by identifying root causes of problems in collections operations, providing a focused approach to addressing the root causes, establishing performance measures, and holding responsible parties accountable for achieving the performance standards. Our work and VA's continuing initiatives to improve collections indicate that VA has not collected all third-party payments to which it is entitled. It is therefore important that VA develop a reliable estimate of uncollected dollars. VA also does not have a complete measure of its full collections costs.
Consequently, VA cannot determine how effectively it supplements its medical care appropriation with third-party collections. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other members of the subcommittee may have. For further information regarding this testimony, please contact Cynthia A. Bascetta at (202) 512-7101. Michael T. Blair, Jr. and Michael Tropauer also contributed to this statement.
Notes to appendix I: Certain actions are described in the plan as mandated, that is, required, but these actions are not legal or regulatory mandates. One action item was cancelled, but its intended improvements will be incorporated into an automated financial system initiative. VA designated the electronic billing project, shown in appendix I as "17a," as completed; however, this indicates only partial completion of action 17, which includes an additional project.
Related GAO products:
VA Health Care: Third-Party Collections Rising as VA Continues to Address Problems in Its Collections Operations. GAO-03-145. Washington, D.C.: January 31, 2003.
VA Health Care: VA Has Not Sufficiently Explored Alternatives for Optimizing Third-Party Collections. GAO-01-1157T. Washington, D.C.: September 20, 2001.
VA Health Care: Third-Party Charges Based on Sound Methodology; Implementation Challenges Remain. GAO/HEHS-99-124. Washington, D.C.: June 11, 1999.
The Department of Veterans Affairs (VA) collects health insurance payments, known as third-party collections, for the treatment of veterans' health conditions that are not a result of injuries or illnesses incurred or aggravated during military service. In September 1999, VA adopted a new fee schedule, called "reasonable charges," that it anticipated would increase revenues from third-party collections. In January 2003, GAO reported on VA's third-party collection efforts and problems in collections operations for fiscal year 2002 as well as VA's initiatives to improve collections (VA Health Care: Third-Party Collections Rising as VA Continues to Address Problems in Its Collections Operations, GAO-03-145, Jan. 31, 2003). GAO was asked to discuss its findings and update third-party collection amounts and agency plans to improve collections. VA's fiscal year 2002 third-party collections rose by 32 percent over fiscal year 2001 collections, to $687 million, and available data for the first half of fiscal year 2003 show that $386 million has been collected so far. The increase in collections reflects VA's improved ability to manage the larger billing volume and more itemized bills required under its new fee schedule. VA managers in three regional health care networks attributed increases in billings to a reduction of billing backlogs and improved collections processes, such as better medical documentation prepared by physicians, more complete identification of billable care by coders, and more bills prepared per biller. Although collections are increasing, operational problems, such as missed billing opportunities, persist and continue to limit the amount VA collects. VA has been implementing the action items in its Revenue Cycle Improvement Plan of September 2001 that are designed to address operational problems, such as unidentified insurance for some patients, insufficient documentation of services for billing, shortages of billing staff, and insufficient pursuit of accounts receivable. VA reported in April 2003 that 10 of 24 action items are complete; 7 are scheduled for implementation by the end of 2003; and the remaining actions will begin in 2004 with full implementation expected in 2005 or 2006. These dates are behind VA's original schedule. In addition, the Chief Business Office, established in May 2002, has developed a new approach that combines the action items with additional initiatives. Given the growing demand for care, especially from higher-income veterans, it is important that VA resolve its operational problems and sustain its commitment to maximizing third-party collections. It is also important for VA to develop a reliable estimate of uncollected dollars and a complete measure of its collections costs. Without this information, VA cannot evaluate its effectiveness in supplementing its medical care appropriation with third-party dollars.
Under the TVA Act of 1933 (TVA Act), as amended, TVA is not subject to most of the regulatory and oversight requirements that commercial electric utilities must satisfy. The Act vests all authority to run and operate TVA in its three-member board of directors. Legislation also limits competition between TVA and other utilities. The TVA Act was amended in 1959 to establish what is commonly referred to as the TVA "fence," which prohibits TVA, with some exceptions, from entering into contracts to sell power outside the service area that TVA and its distributors were serving on July 1, 1957. In addition, the Energy Policy Act of 1992 (EPAct) provides TVA with certain protections from competition, called the "anti-cherry picking" provisions. Under EPAct, TVA is exempt from having to allow other utilities to use its transmission lines to transmit ("wheel") power to customers within TVA's service area. This legislative framework generally insulates TVA from direct wholesale competition. As a result, TVA remains in a position similar to that of a regulated utility monopoly. EPAct's requirement that utilities use their transmission lines to transmit wholesale electricity for other utilities has enabled wholesale customers to obtain electricity from a variety of competing suppliers, thus increasing wholesale competition in the electric utility industry across the United States. In addition, restructuring efforts in many states have created competition at the retail level. If, as expected, retail restructuring continues to occur on a state-by-state basis over the next several years, then industrial, commercial, and, ultimately, residential consumers will be able to purchase their power from one of several competitors rather than from one utility monopoly. Since EPAct exempts TVA from having to transmit power from other utilities to customers within its territory, TVA has not been directly affected by the ongoing restructuring of the electric utility industry to the same extent as other utilities. However, if the Congress were to eliminate TVA's exemption from the wheeling provision of EPAct, its customers would have the option of purchasing their power from other sources after their contracts with TVA expire. Under the Clinton administration's proposal in April 1999 to promote retail competition in the electric power industry, which TVA supported, TVA's exemption from the wheeling provision of EPAct would have been eliminated after January 1, 2003. If this or a similar proposal were enacted, TVA could be required to use its transmission lines to transmit the power of other utilities for consumption within its service territory. A balancing factor is that recent proposals would have also removed the statutory restrictions that prevent TVA from selling wholesale power outside its service territory. Because of these ongoing restructuring efforts, TVA management, like many industry experts, expects that in the future TVA may lose its legislative protections from competition. TVA's management recognized the need to act to better position TVA to compete in an era of increasing competition and, in July 1997, issued a 10-year business plan with that goal in mind. TVA established a 10-year horizon because a majority of its long-term contracts with distributors could begin expiring by 2007, when TVA could be facing greater competitive pressures.
The plan contained three strategic objectives: reduce TVA’s cost of power in order to be in a position to offer competitively priced power by 2007, increase financial flexibility by reducing fixed costs, and build customer allegiance. To help meet the first two strategic objectives noted above, one of the key goals of TVA’s 10-year plan was to reduce debt from its 1997 levels by about one-half, to about $13.2 billion. In addition, while not specifically discussed in the published plan, TVA planned to reduce the balance (i.e., recover the costs through rates) of its deferred assets from about $8.5 billion to $500 million, which TVA estimated to be the net realizable value of its deferred nuclear units. TVA planned to generate cash that could be used to reduce debt by increasing rates beginning in 1998, reducing expenses, and limiting capital expenditures; these actions would increase its financial flexibility and future competitiveness. TVA’s plan to reduce debt and recover the costs of deferred assets while it is still legislatively protected from competition was intended to help position TVA to achieve its ultimate goal of offering competitively priced power by 2007. In a competitive market, if TVA’s power were priced above market because of high debt service costs and the recovery through rates of the costs of its deferred assets, it would be in danger of losing customers. Losing customers could result in stranded costs if TVA is unable to sell the capacity released by the departing customers to other customers for at least the same price. Stranded costs, as discussed later, are costs that are uneconomical to recover in a competitive environment due to regulatory changes. For each of the three objectives addressed in this report, you asked us to answer specific questions. Regarding debt and deferred assets, you asked us to determine what progress TVA has made in achieving the goals of its 10-year business plan for reducing debt and deferred assets, and to what extent TVA has used the additional revenues generated from its 1998 rate increase to reduce debt and deferred assets. Regarding TVA’s financial condition, you asked us to compare TVA’s financial condition, including debt and fixed cost ratios, to neighboring investor-owned utilities (IOUs). Finally, regarding stranded costs, you asked us to (1) explain the link between TVA’s debt and its potential stranded costs, (2) determine whether TVA has calculated potential stranded costs for any of its distributors, and if so, determine the methodology it used, and (3) determine the options for recovering any potential stranded costs at TVA. We evaluated the progress TVA has made in achieving the debt reduction and recovery of deferred assets goals of its 10-year plan, and determined the extent to which TVA is using revenue from its 1998 rate increase to reduce debt and recover the cost of its deferred assets, by interviewing TVA and Congressional Budget Office (CBO) officials; reviewing and analyzing various TVA reports and documents, including annual reports, audited financial statements, the original 10-year business plan and proposed revisions to it; and reviewing supporting documentation (analytical spreadsheets, etc.) and assumptions underlying TVA’s 10-year plan. To determine TVA’s financial condition, we analyzed TVA’s debt and fixed costs, and then compared TVA to its likely competitors. 
To accomplish this, we obtained financial data for TVA and its likely competitors from their audited financial statements; computed and compared key financial ratios for TVA and its likely competitors; analyzed data on the future market price of power; interviewed TVA officials about their efforts to position themselves competitively, including their efforts to reduce debt, recover the cost of their deferred assets, and mitigate and/or recover stranded costs; and reviewed IOU annual reports to determine what steps the IOUs are taking to financially position themselves for competition. To assess TVA’s potential stranded costs, we interviewed industry experts at the Federal Energy Regulatory Commission (FERC), Edison Electric Institute (EEI), and CBO on the options other utilities have pursued to recover stranded costs; reviewed Energy Information Administration (EIA) documents on stranded cost recovery at the state level; questioned TVA officials on TVA’s plans for calculating and recovering potential stranded costs; and analyzed TVA’s contracts to determine whether TVA has contractually relieved its customers of any obligation to pay for any stranded costs. Also, to determine the link between TVA’s debt and its potential stranded costs, we analyzed the interrelationship between debt reduction and stranded cost mitigation. Additional information on our scope and methodology is in appendix I. We conducted our review from April 2000 through January 2001 in accordance with generally accepted government auditing standards. To the extent practical, we used audited financial statement data in performing our analyses, or reconciled the data we used to audited financial statements; however, we were not able to do so in all cases and we did not verify the accuracy of all the data we obtained and used in our analyses. In addition, we based information on debt reduction, deferred asset recovery, and the future market price of power on TVA’s planned revisions to its key goals and assumptions at the time of our review. We requested written comments from TVA on a draft of this report. TVA provided both technical comments, which we have incorporated, as appropriate and written comments, which are reproduced in appendix III. In April 1999, we reported that capital expenditures not accounted for in the 1997 plan would negatively impact TVA’s ability to achieve its plans to reduce debt and recover the cost of deferred assets by 2007. At that time, TVA’s fiscal year 2000 federal budget request acknowledged that TVA would not achieve its goal of reducing outstanding debt by about half until 2009, 2 years later than originally planned. TVA’s goal in its original plan was to reduce debt to about $13.2 billion. Since April 1999, TVA has fallen further behind in meeting its debt reduction goal. TVA now has a target of reducing debt to $19.6 billion by 2007; it no longer is projecting a target for debt reduction beyond 2007. For fiscal years 1998 through 2000, TVA reduced its debt by about $1.4 billion. However, TVA’s debt reduction shortfall also totaled about $1.4 billion, which resulted from greater than anticipated capital expenditures and annual operating and other expenses and lower revenues than projected in 1997. These same factors will hamper TVA’s debt reduction efforts over the last 7 years of the plan. In addition, although TVA reduced deferred assets to the extent planned for the first 3 years of the plan, it is revising the amount of deferred assets it plans to recover through 2007 downward. 
TVA now plans to reduce the balance of its deferred assets to about $3.9 billion by September 30, 2007, compared to its original goal of $500 million. To achieve the overall debt reduction goal in the original 10-year plan, TVA established annual debt reduction goals. In the 1997 plan, the annual debt reduction goals ranged from $476 million in 1998 to $2 billion in 2007. TVA has made progress in reducing debt, and in fact, exceeded its target goal in the first year of the plan. However, TVA fell far short in the second and third years. Through the first 3 years of the 10-year plan, TVA reduced debt by about $1.4 billion, but its debt reduction shortfall also totaled about $1.4 billion. In addition, TVA is now planning to issue a revised plan that would significantly reduce the goals for 2001 through 2007. Figure 1 compares the annual debt reduction goals contained in TVA’s July 1997 10-year plan to TVA’s actual debt reduction for fiscal years 1998 through 2000 and to TVA’s proposed revisions to its annual debt reduction goals for fiscal years 2001 through 2007. In its presidential budget submission for fiscal year 2000, TVA acknowledged that it would not achieve its goal of reducing debt by about one-half by 2007. Instead, TVA said it would not meet the debt reduction goal until 2009, 2 years later than the goal in its original 10-year plan. TVA is in the process of revising its goal for reducing outstanding debt again. TVA officials now estimate that its outstanding debt by September 30, 2007, will be between $18 billion and $24 billion, with a target of about $19.6 billion, or about $6.4 billion higher than TVA envisioned when it issued the 1997 plan. TVA is not projecting a target reduction goal beyond 2007. Figure 2 compares the annual outstanding debt goals contained in TVA’s July 1997 10-year plan to TVA’s actual debt outstanding for fiscal years 1998 through 2000 and to TVA’s proposed revisions to annual goals for fiscal years 2001 through 2007. TVA officials attribute the $1.4 billion debt reduction shortfall over the first 3 years to four factors. The first factor is greater than anticipated cash expenditures for new generating capacity. For fiscal years 1998 through 2000, TVA spent $436 million more than planned to purchase new peaking generator units. The 1997 plan assumed that TVA would meet future increases in demand for power by purchasing power from other utilities, which would have used less cash through 2007 than purchasing the peaking units. TVA officials believe that its capital expenditures for new generating capacity will have two positive effects. First, they believe the new generating capacity will ultimately reduce TVA’s cost of power, even though the increased capital expenditures will use cash that could have been used to reduce debt. Second, they believe the new generating capacity will enhance system reliability by providing a dependable source of power. The second factor to which TVA officials attribute the debt reduction shortfall over the first 3 years of the plan is greater than anticipated capital expenditures requiring cash for environmental controls to meet Clean Air Act requirements. For fiscal years 1998 through 2000, TVA spent $276 million more than planned on environmental controls. Meanwhile, over the 3-year period, TVA spent about $221 million less than planned on other types of capital items. 
The net effect of increased spending on new generating capacity and environmental controls and decreased spending on other types of capital items is that TVA's capital expenditures have exceeded the planned amount. TVA had forecast about $1.7 billion in capital expenditures over that 3-year period; its actual capital expenditures were almost $500 million more (about $2.2 billion). Under current plans, TVA expects its major capital costs for new generating capacity and environmental controls to be completed by 2004. Figure 3 compares the annual capital expenditure goals contained in TVA's July 1997 10-year plan to TVA's actual capital expenditures for fiscal years 1998 through 2000 and to TVA's proposed revisions to annual goals for fiscal years 2001 through 2007. The third factor to which TVA officials attribute the debt reduction shortfall over the first 3 years of the plan is a net increase in annual expenses requiring cash that could have been used for debt reduction. For fiscal years 1998 through 2000, TVA's operating and maintenance expenses and its sales, general, and administrative expenses were greater than anticipated. This increase in annual expenses was partially offset by a reduction in fuel and purchased power expense and interest expense. The net effect was that annual expenses totaled about $122 million more than planned. The fourth factor to which TVA officials attribute the debt reduction shortfall over the first 3 years of the plan is less revenue than originally anticipated. According to TVA officials, the revenue shortfall was caused primarily by mild winters that lessened demand for electricity. The revenue shortfall for fiscal years 1998 through 2000 totaled about $725 million. Our analysis confirms that the above four factors were the primary ones that hampered TVA's debt reduction efforts for fiscal years 1998 through 2000. These factors are also projected to limit TVA's ability to reduce debt in fiscal years 2001 through 2007. Over this 7-year period, the primary factors limiting TVA's debt reduction efforts are that annual revenue is expected to be lower, and capital expenditures and cash expenses are expected to be higher. This reduces the amount of cash that would have been available to repay debt. TVA now anticipates that, compared with the 1997 plan, its revenue will be about $2.2 billion lower and its capital expenditures and cash expenses will be higher by about $1.6 billion and $2.5 billion, respectively. Table 1 shows our analysis of the factors affecting cash available to reduce debt from 1998 through 2007. In developing its 10-year plan, TVA planned to use the additional revenue from its 1998 rate increase to reduce its debt. TVA officials attribute about an additional $1.24 billion in revenue over the first 3 years of the plan to the rate increase. During this period, TVA has reduced its outstanding debt by more than a comparable amount—about $1.4 billion. A key element of TVA's plan was not only to reduce the cost of its power by reducing its debt and the corresponding interest expense, but also to recover a substantial portion of the costs of its deferred assets. By increasing operating revenues and reducing interest and other expenses to generate cash flow that could be used to reduce debt, TVA would have the opportunity to use revenues in excess of expenses to recover a portion of the costs of its deferred assets.
However, as noted previously, the proposed revision to the plan contains additional operating and other expenses over the remainder of the 10-year period, which, absent any future rate increases, will decrease the amount of revenue available to recover deferred assets. TVA has also added about $600 million in deferred assets, some of which will have to be recovered in the future. Although TVA recovered the costs of deferred assets to the extent planned over the first 3 years of the plan, it is reducing its overall deferred asset recovery goals through 2007. TVA has a significant amount of unrecovered capital costs associated with three uncompleted and nonproducing deferred nuclear units—about $6.3 billion as of September 30, 2000. At that time, TVA’s investment in its deferred nuclear units represented about 26 percent of the cost of TVA’s total undepreciated generating property, plant, and equipment. The deferred units do not generate power, and TVA has chosen not to begin to recover their costs through rates. In contrast, the unrecovered costs of TVA’s operating nuclear plants, which produced about 31 percent of TVA’s power in 2000, represented about 45 percent of the cost of TVA’s total undepreciated generating assets as of September 30, 2000. At the time TVA issued the original 10-year business plan, the unrecovered balance of TVA’s deferred assets, including both its nuclear units and other deferred assets, was about $8.5 billion. TVA recovered the cost of deferred assets to the extent planned for over the first 3 years of the plan. Through September 30, 2000, $1.1 billion in other deferred assets had been recovered through rates, but recovery of the cost of the deferred nuclear units had not begun. However, since the original plan was issued, TVA has also added about $600 million in other deferred assets, some of which will have to be recovered in the future; its current total is about $8 billion. TVA’s overall plan for recovering the costs of its deferred assets through 2007 is being reduced significantly. TVA now plans to reduce the balance of its deferred assets, including both its nuclear units and other deferred assets, to about $3.9 billion; this represents much less deferred asset recovery than TVA’s original estimate of $500 million. Figure 4 compares the annual estimated remaining balances of deferred assets (both the deferred nuclear units and other deferred assets) contained in TVA’s July 1997 10-year plan to TVA’s actual deferred asset balances as of the end of fiscal years 1998 through 2000 and to TVA’s estimated balances for fiscal years 2001 through 2007. Not reducing debt and recovering deferred assets to the extent planned by 2007 while still legislatively protected from competition could diminish TVA’s future competitive prospects. Specifically, not meeting these goals could cause the price of its future power to be above market, if TVA’s debt service costs remain relatively high at the time it is required to compete and if TVA is at the same time attempting to recover the costs of its deferred assets through rates. Assuming that TVA’s outstanding debt balance is $19.6 billion as of September 30, 2007, and its weighted average interest rate remains about 6.5 percent, we estimate that TVA’s interest expense in the year 2008 will be about $1.27 billion, about $416 million higher than if debt were reduced to $13.2 billion. 
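The interest comparison above follows directly from the stated assumptions; as a quick check, the sketch below reproduces it using only the two debt balances and the roughly 6.5 percent weighted average rate cited in this report.

```python
# Reproduce the interest expense comparison cited above.
rate = 0.065                    # approximate weighted average interest rate

debt_revised_target = 19.6e9    # revised target for debt outstanding at September 30, 2007
debt_original_goal = 13.2e9     # original 10-year plan goal (roughly half of 1997 debt)

interest_at_revised = debt_revised_target * rate   # about $1.27 billion
interest_at_original = debt_original_goal * rate   # about $0.86 billion
difference = interest_at_revised - interest_at_original

print(f"Interest at $19.6 billion of debt: ${interest_at_revised / 1e9:.2f} billion")
print(f"Interest at $13.2 billion of debt: ${interest_at_original / 1e9:.2f} billion")
print(f"Difference:                        ${difference / 1e6:.0f} million")
# 19.6e9 * 0.065 = 1.274e9 and 13.2e9 * 0.065 = 0.858e9, so the difference is
# about $416 million, matching the figure in the text.
```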
As we stated in our April 1999 report, the more progress TVA makes in addressing these issues while it has legislative protections, the greater its prospects for being competitive if it loses those protections in the future. Although reducing debt and the amount of deferred asset costs that have not yet been recovered are important to TVA as it prepares for competition, TVA’s future competitiveness will be based to a large degree on market conditions and how TVA will be restructured if and when TVA loses its legislative protections. Of particular importance is the uncertainty of the future market price of power. In our 1999 assessment of TVA’s 10-year plan, we found that TVA’s projection of the future market price of wholesale power in 2007 was somewhat lower than the projections of leading industry experts. This lower projection prompted TVA to be aggressive in its planning to reduce costs to position itself to offer competitively priced power by 2007. TVA and other industry experts are continuing to revise their projections of the future market price of power in 2007. TVA’s projection is a load-shaped forecast—i.e., its projection is based specifically on how TVA’s load varies during different hours of the day and different days of the week. TVA officials told us that higher projections are warranted now than when it prepared its plan in 1997 primarily due to projected increases in the price of natural gas, but also due to a combination of other factors, including the extreme volatility of spot prices (in the summer months), increasing power demands beyond what they expected 3 years ago, shortages (or at least, shrinking surpluses) of both generating and transmission capacity, and a better understanding of the increased costs of complying with environmental regulations that are likely to take effect between now and 2007. TVA has stated that the impact of these factors can be seen in higher current trading prices, higher forward prices being offered by suppliers, higher long-term contract prices, and higher energy prices. TVA officials are now forecasting a market price of power in 2007 in the range of 4.0 to 5.0 cents per kilowatthour (kWh), which would be sufficient to cover its projected costs of about 3.8 to 3.9 cents per kWh in 2007. An analysis by Salomon Smith Barney, which extends through 2004, supports TVA’s position that market indicators suggest that the future market price of power will be higher during this part of the plan period. Not all industry experts agree with TVA’s belief that the price of natural gas will necessarily drive electricity prices higher. For example, the Energy Information Administration (EIA) projects a downward price trend (in current dollars) between now and 2007 in the region in which TVA operates, in part due to declining coal prices that EIA projects would more than offset increasing gas prices. EIA also projects that nuclear fuel prices will remain stable. However, when projecting future prices by geographic region, EIA and other industry experts generally forecast the future market price of power on an average yearly price that includes all peaks and valleys. Such average yearly price forecasts are not directly comparable to TVA’s load-shaped forecast. Differing forecasts by various industry experts underscore the uncertainty of predicting the future market price of power. 
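The difference between TVA's load-shaped forecast and the average yearly price forecasts used by EIA and other experts can be illustrated with a toy calculation. The prices and load shares below are invented solely to show the mechanics; they are not TVA or EIA projections.

```python
# Toy example: a load-shaped price forecast versus a simple average of period prices.
# Each entry is (hypothetical market price in cents/kWh, hypothetical share of load).
periods = {
    "off-peak": (2.5, 0.45),
    "shoulder": (4.0, 0.35),
    "on-peak":  (8.0, 0.20),
}

simple_average = sum(price for price, _ in periods.values()) / len(periods)
load_shaped = sum(price * share for price, share in periods.values())

print(f"Simple average of period prices: {simple_average:.2f} cents/kWh")
print(f"Load-weighted (load-shaped):     {load_shaped:.2f} cents/kWh")
# Because the load-shaped figure weights prices by when the load actually occurs,
# it can sit above or below a flat yearly average, which is why the two kinds of
# forecasts are not directly comparable.
```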
The higher actual market prices are, the better positioned TVA will be to generate revenue that could be used to pay down debt and recover costs, including the costs of deferred assets. However, by increasing its projections for the future market price of power, TVA assumes it can accommodate a higher debt level than originally planned. Because of the uncertainty surrounding whether TVA's projections of higher market prices in 2007 are accurate, TVA's higher debt projections increase the risk that it will not be able to generate the revenue needed to recover all costs or offer competitively priced power at that time. In a competitive environment, these assumptions could increase the federal government's risk of loss due to its financial involvement with TVA. A key objective of TVA's 1997 plan was to alter its cost structure from a rigid, high fixed-to-variable cost relationship to a cost structure with more financial flexibility that is better able to adjust to a more volatile marketplace. However, while TVA has taken positive steps, its financial flexibility remains below that of likely competitors, largely because its debt remains relatively high. Another key objective of TVA's 1997 plan was to reduce its cost of power. One of the components of the cost of power is the recovery of the costs of its capital assets. As with financial flexibility, TVA has made some progress in recovering the costs of its capital assets, but financial indicators show that it has recovered fewer of these costs than its likely competitors. In 1995 we reported that one option available for TVA to improve its financial condition was to raise rates while it was still legislatively protected from competition and use the proceeds to reduce its debt. In 1998, TVA implemented its first rate increase in 10 years. For the previous 10 years, TVA had chosen to keep rates as low as possible rather than generate additional revenue that could have been used to reduce debt. Revenue from TVA's 1998 rate increase has reduced debt (and corresponding interest expense) and recovered some of the costs of deferred assets over the first 3 years of its 10-year plan. From September 30, 1997, through September 30, 2000, TVA reduced its debt from about $27.4 billion to about $26.0 billion. This debt reduction, along with the refinancing of debt at lower interest rates, enabled TVA to reduce its annual interest expense from about $2.0 billion in fiscal year 1997 to about $1.7 billion in fiscal year 2000. In addition, TVA has recovered about $1.1 billion of its deferred assets through rates. Although TVA has not reduced debt or recovered the costs of deferred assets to the extent anticipated in its original plan, these actions are important because they are a step toward giving TVA more financial flexibility to adjust its rates in a competitive environment. To assess the progress TVA has made in achieving its key objective of altering its cost structure from a rigid, high fixed-to-variable cost relationship to a cost structure with more financial flexibility, and to put TVA's financial condition in perspective, we compared TVA to likely competitors in terms of (1) total financing costs, (2) fixed financing costs, and (3) net cash generated from operations as a percentage of expenditures for property, plant, and equipment and common stock dividends. These ratios are indicators of TVA's flexibility to withstand competitive or financial challenges.
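As a compact reference for the three indicators just listed, the sketch below computes each from a utility's reported figures; the discussion that follows defines each ratio in turn. The dollar amounts are placeholders for illustration, not TVA or IOU data.

```python
# Sketch of the three financial flexibility indicators used in this comparison.
# All dollar amounts are placeholders.
operating_revenue = 6.6e9
interest_expense = 1.7e9          # for TVA, add payments to the government on past appropriations
preferred_dividends = 0.0         # zero for TVA; typically nonzero for investor-owned utilities
common_dividends = 0.0            # zero for TVA
net_cash_from_operations = 2.0e9  # cash in excess of operating and interest expenses
ppe_expenditures = 0.7e9          # spending on property, plant, and equipment

# (1) Total financing costs to revenue: share of revenue needed for all financing costs.
total_financing_ratio = (interest_expense + preferred_dividends + common_dividends) / operating_revenue

# (2) Fixed financing costs to revenue: excludes common stock dividends, which an
#     investor-owned utility can reduce or suspend under financial stress.
fixed_financing_ratio = (interest_expense + preferred_dividends) / operating_revenue

# (3) Net cash from operations as a percentage of PP&E expenditures and common
#     dividends: 100 percent or more means operations fully fund capital spending.
cash_coverage = net_cash_from_operations / (ppe_expenditures + common_dividends)

print(f"Total financing costs / revenue: {total_financing_ratio:.0%}")
print(f"Fixed financing costs / revenue: {fixed_financing_ratio:.0%}")
print(f"Net cash / (PP&E + dividends):   {cash_coverage:.0%}")
# Lower values for the first two ratios and a higher value for the third indicate
# greater flexibility to respond to financial or competitive challenges.
```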
To assess TVA’s financing costs compared to these competitors, we computed the total financing costs to revenue ratio, which is the percentage of an entity’s operating revenue that is needed to cover all of its financing costs. A lower percentage indicates greater flexibility to respond to financial or competitive challenges. Financing costs for TVA, which consist of the interest expense on its outstanding debt and payments made to the federal government as returns on past appropriations, are fixed costs in the short term that must be paid even in times of financial or competitive difficulty. In contrast, for the IOUs, financing costs include preferred and common stock dividends in addition to interest expense, because part of the IOUs’ capital is derived from preferred and common stock and dividends represent the cost of this equity capital. Figure 5 shows that TVA’s total financing costs, although improved since 1994, remain high when compared to those of likely competitors. Next, we computed the fixed financing costs to revenue ratio, which indicates the percentage of operating revenues needed to cover the fixed portion of the financing costs. For this ratio, we excluded the common stock dividends paid by IOUs because these are not contractual obligations that must be paid. They can be reduced—or even suspended in extreme cases—to allow an entity to respond to financial or competitive challenges. As with the total financing costs to revenue ratio, the lower the percentage of the fixed financing costs to revenue, the greater the financial flexibility of the entity. Figure 6 shows that, while TVA has made progress since 1994, its fixed financing costs remain high compared to those of likely competitors. For example, for fiscal year 1999, 28 cents of every revenue dollar earned by TVA went to pay for fixed financing costs compared to about 9 cents on average for its likely competitors. Another key indicator of financial flexibility is the ratio of net cash from operations (i.e., cash in excess of operating and interest expenses) to expenditures for property, plant, and equipment (PP&E) and common stock dividends. This net cash in effect represents the amount available for management’s discretionary use. A percentage of 100 would indicate sufficient net cash provided by operations to pay for 100 percent of annual PP&E expenditures and common stock dividends. By necessity, utilities that are unable to pay for capital expenditures from net cash are forced to pay for them through retained earnings or by borrowing funds or issuing stock. Issuing debt to cover capital expenditures increases a utility’s cost of power by requiring annual interest payments, and issuing stock could also increase the cost of power through the payment of dividends. Since TVA does not pay dividends, its ratio only includes expenditures for PP&E. A higher percentage indicates greater flexibility. Because of increased revenue from TVA’s recent rate increase, a significant reduction in annual capital expenditures for its nuclear power program, and cost control measures that reduced certain expenses, TVA’s ratio has improved significantly and now compares favorably to those of likely competitors. Figure 7 illustrates the improvement TVA has made to date compared to likely competitors. Electricity providers, including TVA, generally recover their capital costs once the capital assets have been placed in service by spreading these costs over future periods for recovery through rates. 
This way, customers “pay” for the capital assets over time as the assets provide benefits. When a decision is made not to complete a capital asset, it becomes “abandoned.” Accounting standards require that abandoned assets be classified as regulatory assets and amortized into operating expense; therefore, they would be included in rates over time. Thus, even though abandoned assets are nonproductive, the costs may still be recovered. TVA’s three uncompleted deferred nuclear power units have not been classified as abandoned, even though no construction work has been done in the last 12 to 15 years. In 1995 and 1997, we reported that TVA should classify them as regulatory assets and begin to recover the costs immediately. However, TVA continues to assert that there is a possibility the units will be completed in the future and has not classified them as regulatory assets and begun to recover their costs. As of September 30, 2000, the deferred cost of the three uncompleted nuclear generating units was about $6.3 billion. If TVA is required to compete with other electricity providers, depending on the market price of power and TVA’s cost of providing power, recovery of these deferred assets could be difficult. Effective for 1999, TVA began emphasizing the accelerated recovery of certain of its other deferred assets in its planning and adopted accounting policies that would enable it to recover more of these costs earlier. However, as the following analysis indicates, TVA’s continued deferral of the $6.3 billion related to the three nuclear units would hinder its ability to compete in a restructured environment, if TVA tries to recover the costs through rates. This would increase the risk of loss to the federal government from its financial involvement in TVA. The extent to which the costs of deferred capital assets have not been recovered by TVA compared to its likely competitors can be shown by two analyses. The first analysis compares the amount of capital assets that have not yet begun to be taken into rates to gross PP&E. For TVA, this consists of construction work in progress (CWIP) and the costs of the deferred nuclear units; for the other entities this consists of CWIP only. A lower ratio indicates fewer capital costs to be recovered through future rates, and therefore more flexibility to adjust rates to meet future competition. TVA’s ratio improved—dropping by more than half—when it brought two nuclear plants on line in 1996 and began to recover their costs. However, as figure 8 shows, the portion of TVA’s capital assets that has not yet begun to be taken into rates remains significantly higher than that of likely competitors. This is due largely to the deferral of TVA’s three uncompleted nuclear units. For example, about 19 percent of the total cost of TVA’s PP&E as of September 30, 1999, was not in rates, while the highest percentage for TVA’s likely competitors was only 10 percent. A second way to analyze the extent to which capital costs have been recovered through rates is to compare accumulated depreciation/amortization to gross PP&E. A higher ratio indicates that a greater percentage of the cost of PP&E has been recovered through rates. A utility that has already recovered a greater portion of its capital costs could be in a better financial condition going into an increasingly competitive environment because it would not have to include those costs in future rates. TVA has also made progress in this area since 1994, as have, in general, its likely competitors. 
However, figure 9 shows that as of September 30, 1999, TVA had recovered a substantially smaller portion of its capital costs than most of its likely competitors, again, largely due to the deferred nuclear units. When its financing costs and unrecovered deferred assets are considered, TVA's financial condition compares unfavorably to that of its likely competitors. Although TVA's ratio of net cash from operations to expenditures for PP&E and common stock dividends is better than that of its likely competitors, this advantage is negated by TVA's relatively higher financing costs, including fixed financing costs, and relatively higher deferred asset costs. These factors reduce TVA's financial flexibility to respond to future financial or competitive pressures, even though increasing that flexibility is a key objective of TVA's 10-year plan. Bond analysts with experience rating TVA's bonds confirmed our assessment by stating that if forced to compete today, TVA's financial condition would pose a serious challenge. The analysts further stated that their Aaa rating of TVA bonds is based on TVA's ties to the federal government and the belief that any restructuring legislation would give TVA sufficient time to prepare for competition. According to the analysts, their bond rating of TVA was not based on the same financial criteria applied to the other entities rated. In terms of progress toward the key objectives of its 1997 plan, TVA's financial condition remains unfavorable compared to that of its likely competitors in the current environment. However, TVA also has certain competitive advantages. Specifically, TVA:
remains its own regulator;
is not subject to antitrust laws and regulations;
enjoys a high bond rating, and associated lower interest costs, based on its ties to the federal government;
is a government entity that is not required to generate the level of net income that would be needed by a private corporation to provide an expected rate of return;
is not required to pay federal and state income taxes and various local taxes, but is required to make payments in lieu of taxes to state and local governments equal to 5 percent of gross revenue from sales of power in areas where its power operations are conducted (TVA's distributors are also required to pay various state and local taxes); and
has relatively more low-cost hydroelectric power than neighboring utilities.
Although TVA enjoys these competitive advantages, its high debt and unrecovered costs would present challenges in a competitive environment. However, it is not possible to predict TVA's future competitive position. In addition to uncertainties over the future market price of power, TVA's future competitive position will be affected by a number of issues, including:
the specific requirements of any legislation that might remove TVA's legislative protections, including whether it would be able to retain some or all of the competitive advantages described previously;
actions being taken by TVA to prepare for competition in relation to those being taken by TVA's competitors;
the amount of time before TVA might lose its protections from competition and is required to compete with other utilities (the longer TVA is legislatively protected from competition, the longer it will have to reduce its debt and related financing costs and recover deferred costs through rates);
the extent to which TVA would write off all or a portion of the cost of its deferred nuclear units to retained earnings should it go from a regulated to a restructured, competitive environment
(to the extent retained earnings are sufficient to cover the cost of the write-offs, any costs written off directly to retained earnings would not have to be recovered through future rates); and
the total cost of delivering power in relation to likely competitors, generation capacity and mix, transmission capability, and geographic location.
Stranded costs can generally be defined as costs that become uneconomical to recover through rates when a utility moves from a regulated to a competitive environment. Stranded costs arise in competitive markets as a result of uneconomic assets, the costs of which are not recoverable at market rates. There are two commonly used methods for calculating stranded costs, and various mechanisms have been used to recover them in the states that have restructured their electricity markets. TVA's potential for stranded costs arises mainly from its uneconomic assets—primarily its three nonproducing nuclear units with unrecovered costs totaling about $6.3 billion—and the fixed costs associated with its high debt. The mechanism(s) that would be available to TVA to recover stranded costs would determine which customer group would pay for them. Stranded costs occur when a utility moves from a regulated to a competitive environment and is unable to recover certain costs because the market price of power will not allow it to generate revenue at a level sufficient to recover these costs. Such costs result from past decisions that were considered prudent when they were made, and the costs would have been recoverable in a cost-based, regulated environment. However, in a competitive environment, recovery of these costs would force a utility's price above market, and it consequently could not recover them by charging market-based rates. As discussed below and in appendix II, states that have restructured their electricity markets have addressed the issue of mitigating and recovering potential stranded costs in various ways. Stranded costs can be the result of, among other things:
investment in generation assets that may not be able to produce competitively priced power in a restructured environment, even though the investments were considered prudent at the time they were made;
power purchase contracts made in anticipation of future needs that would become uneconomical should market prices for power in a competitive market become lower;
regulatory assets, such as deferred income taxes that regulators would have eventually allowed utilities to collect but that may not be recoverable in a competitive market;
future decommissioning costs for nuclear facilities; and
social programs for which public utility commissions mandated spending, such as demand side management; such costs would typically be capitalized and amortized in a regulated environment, but, since the costs are not part of generating power, the market price for electricity under competition may not allow recovery of them.
Two methods are commonly used to calculate the amount of allowable stranded costs—the FERC "revenues lost" methodology and the "asset-by-asset" approach. FERC has jurisdiction over stranded cost recovery related to wholesale power sales and power transmission and uses the revenues lost method in determining allowable stranded costs for these activities. If legislation were enacted providing for TVA to compete in a restructured environment, TVA would likely fall under FERC jurisdiction for stranded cost recovery for its wholesale customers.
TVA’s wholesale sales to its 158 distributors were about $6 billion in fiscal year 2000, or about 88 percent of TVA’s total operating revenues. Under the FERC methodology, whether a utility’s plants are nonproducing or productive is immaterial to the stranded cost calculation, as long as the costs associated with the plants are included in rates at the time a customer departs TVA’s system. According to FERC officials, stranded cost recovery assumes the costs are already in the rate base; if not, FERC officials told us they would likely not consider them in a stranded cost recovery claim. The three deferred nuclear units, with costs of about $6.3 billion as of September 30, 2000, that TVA has not yet begun recovering, are a primary reason for TVA’s potential exposure to stranded costs. However, TVA’s projections through 2007, using its current power rates, show that by the end of 2007 the costs will have been reduced to about $3.8 billion. Depending on the timing of any restructuring legislation affecting TVA and assuming that FERC would have jurisdiction over TVA, it is unclear whether FERC would consider these costs to be in TVA’s rate base and, thus, allow TVA to include some or all of these costs in a stranded cost recovery claim. In the past when TVA calculated its stranded costs, it used the FERC “revenues lost” methodology. When the 4-County Electric Power Association (near Columbus, Mississippi) and the city of Bristol, Virginia, threatened to find other sources of power, TVA used the FERC methodology to calculate stranded costs, and TVA officials told us that they would continue to use the FERC methodology to calculate stranded costs in the future. TVA’s calculations of stranded costs for the 4-County Electric Power Association ranged from $57 million to $133 million. The 4-County Electric Power Association ultimately decided not to leave the TVA system and therefore no stranded costs were assessed. In contrast, Bristol did leave the TVA system. TVA again calculated stranded costs using the FERC methodology and initially attempted to assess Bristol for $54 million for stranded costs. However, TVA and the city of Bristol ultimately negotiated a settlement that included an agreement under which Bristol would not be assessed for stranded costs, but would purchase transmission and certain ancillary services from TVA. According to a FERC official, under the revenues lost method, when a customer leaves a utility’s system, stranded costs are calculated by first taking the revenue stream that the utility could have expected to recover if the customer had not left, then subtracting the competitive market value of the electricity capacity released by the departing customer (that the utility will sell to other customers), then multiplying the result by the length of time the utility could have reasonably expected to continue to serve the departing customer. Figure 10 illustrates TVA’s potential application of the FERC methodology. The second commonly used method to calculate stranded costs is the “asset-by-asset” or “bottoms up” approach. This method has been used by the states when they restructure their retail markets. In this method, the market value of a utility’s generating assets is determined and compared to the amount at which those assets are currently recorded on the utility’s books (book value). The difference would be reflected on the income statement as a gain or loss and recorded in the retained earnings of the organization. 
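The two calculation methods described above lend themselves to a short numeric sketch. The customer revenues, market values, time horizon, and asset values below are hypothetical and are chosen only to show the mechanics of each method; they are not TVA figures.

```python
# Hypothetical illustration of the two common stranded cost calculations.

# (1) FERC "revenues lost" method, applied when a wholesale customer departs.
expected_annual_revenue = 40e6     # revenue expected if the customer had not left
market_value_of_capacity = 32e6    # annual market value of the capacity the customer releases
years_of_expected_service = 5      # how long the utility reasonably expected to serve the customer

revenues_lost_stranded_cost = (
    (expected_annual_revenue - market_value_of_capacity) * years_of_expected_service
)
print(f"Revenues lost method:  ${revenues_lost_stranded_cost / 1e6:.0f} million")

# (2) Asset-by-asset ("bottoms up") approach, used by states in retail restructuring.
book_value_of_generation = 9.0e9     # carrying amount of generating assets on the books
market_value_of_generation = 7.2e9   # estimated market value of those assets

asset_based_stranded_cost = max(book_value_of_generation - market_value_of_generation, 0)
print(f"Asset-by-asset method: ${asset_based_stranded_cost / 1e9:.1f} billion")
# A book value above market value produces stranded costs equal to the difference;
# a book value at or below market value produces none under this approach.
```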
If the total book value of a utility's generating assets exceeds their total market value, the utility would have stranded costs equal to the difference between book and market values. Because TVA is a unique self-regulator that crosses state borders and is not currently subject to FERC regulation, it is unclear what entity would have jurisdiction over any stranded cost recovery at the retail level. Sales to TVA's direct service industrial customers and other nondistributors, which we consider retail sales because they are sales to final users, were about $0.7 billion in fiscal year 2000, or about 10.4 percent of TVA's total operating revenues. In the states that have restructured their electricity markets, there have been five commonly used mechanisms to recover stranded costs. Depending on the approval of state regulators, utilities have the following options; the choice of option affects which customer group pays.
Exit fees: fees charged to departing customers, either via a lump sum or over a set number of years.
Competitive transition charge (or competitive transition assessment): either (1) a one-time charge applied to all customers at the time the state initiates restructuring, or (2) charges based on kilowatthour (kWh) usage, usually charged to remaining customers over a set number of years.
Wires charge (also called transmission surcharge): a predetermined surcharge that is not based on kWh usage, which is added to remaining customers' power bills during a set period of time; sometimes considered a subset of competitive transition charges.
Rate freeze or cap: regulators set a cap on the total amount a utility can charge; however, under the cap, the regulator would allow the utility to recover stranded costs by charging higher prices for the two components of the market that are still regulated (distribution and transmission). The cap is usually frozen for the estimated length of time needed to recover the stranded costs. Remaining customers bear the burden.
Write off to retained earnings: in the case where a utility moves from a regulated to a competitive environment and has assets whose book value is in excess of market value, it would mark its assets to market value and recognize any excess of book value over market value as a loss on the income statement, which would flow through to retained earnings. Retained earnings represent cumulative net profit from past operations that can be used to benefit either stockholders or current and future customers, by keeping profits in the company for future use. In addition, the change to a competitive environment, with overvalued assets, could result in stranded costs. However, the legislation that caused the change to a competitive environment could give utilities the option of recovering the amount of overvalued assets over time, rather than charging all the cost to retained earnings immediately. Writing off the costs of the overvalued assets to retained earnings immediately would mitigate potential stranded costs and eliminate the need to recover the cost of these assets from future ratepayers, making a utility's power rates potentially more competitive.
TVA continues to operate much like a regulated monopoly because of its legislative protections from competition. Since regulatory changes requiring TVA to compete with other electricity providers have not been made, TVA does not currently have stranded costs.
However, as discussed previously, TVA has uneconomic assets—primarily its three nonproducing nuclear units with unrecovered costs totaling about $6.3 billion that do not generate revenue. In 1998, TVA estimated the net realizable value of these assets to be about $500 million. TVA has not made a final decision on whether to abandon these three units or complete them and place them into service. If it abandons them, under current accounting standards it would have to write the units down to their net realizable value and recognize the difference as a loss, which would flow through to retained earnings; FASB 90 would apply because TVA remains in a regulated environment. This action would require approval of TVA’s Board. If its retained earnings are not sufficient to cover any losses arising from revaluation of these units, TVA could find itself with stranded costs if legislation were enacted that would require TVA to compete with other electricity providers before it completes these units and brings them into operation. TVA’s ability to recover costs that could ultimately become stranded is compounded by TVA’s high debt and corresponding financing costs.

Proposed restructuring legislation would have required TVA’s customer contracts to be renegotiated; however, it is possible that existing contract provisions will remain in effect. Thus, if TVA enters a competitive environment with stranded costs, it may be unable to collect them from certain departing customers after 2007, and the burden for recovering these costs may fall on remaining customers or retained earnings from prior customers. According to TVA officials, if TVA were unable to collect any stranded costs from departing customers under its contracts, remaining customers would bear the burden of stranded cost recovery. To the extent stranded cost recovery is spread among remaining customers, it would become more difficult for TVA to price its power competitively.

A key element of TVA’s 10-year business plan is to reduce its cost of power. TVA planned to accomplish this by reducing expenses, limiting capital expenditures, and instituting a rate increase in 1998 to increase the cash flow available to pay down debt. Reducing debt, in turn, reduces the corresponding annual interest expense. By reducing interest expense, TVA frees up cash that can be used to further reduce debt. In addition, these actions increase the portion of revenue that would be available to recover the costs of its deferred assets. To the extent that TVA reduces costs, it will be able to offer more competitively priced power and its distributors will be less likely to leave TVA’s system for alternate suppliers. At the wholesale level, under current FERC rules, if its distributors do not leave, TVA does not have the option of recovering stranded costs. If its distributors decide to leave, TVA would have potential stranded costs if TVA is either unable to sell the power released by the departing distributor or is forced to sell the power that would have been purchased by the departing distributor for lower rates.

Figure 11 illustrates the link between debt reduction and stranded costs. This circular relationship is key to understanding how TVA’s 10-year plan links to potential stranded costs. In its original 10-year plan, a key element was to reduce TVA’s cost of power by cutting its debt in half by September 30, 2007. By reducing debt, TVA would also reduce future interest expense, which would free up additional cash that could be used to further reduce debt. However, not explained in the published plan was how the revenue generated from its 1998 rate increase would give TVA the opportunity to recover the cost of its deferred assets.
By increasing revenue and reducing expenses, TVA would free up revenue that could be used to recover the cost of its deferred assets and cash that could be used to pay down debt. As discussed previously, TVA estimates the additional revenue from the rate increase over the first 3 years of the plan to be about $1.24 billion. TVA had the option to use that revenue for any authorized purpose, such as adding any excess revenue to retained earnings, accelerating depreciation, or amortizing its deferred assets, including writing down its deferred nuclear units. TVA planned to first amortize some of its other deferred assets before writing down its deferred nuclear units. To accomplish this, TVA’s Board of Directors approved a resolution to begin accelerating amortization of these other deferred assets. This means that in any year in which TVA’s revenue is sufficient to meet all of its legal requirements to recover all costs and to comply with all laws and regulations regarding revenue levels, any excess revenue can be used to accelerate the write-down of a portion of the costs of its deferred assets, allowing TVA to recover these costs over time.

In relation to its deferred nuclear units, TVA’s original plan was to recover all but $500 million of the $6.3 billion in costs by September 30, 2007, at which time TVA officials believed it could be subject to a competitive environment through legislative changes and expiring customer contracts. Its proposed revision to the 10-year plan now calls for a balance of about $3.8 billion by 2007, or about $3.3 billion more than originally planned. To the extent TVA recovers the costs of the deferred nuclear units before such time as the Congress may remove its legislative protections, it would no longer have to recover these costs through future rates, potentially making its power more competitive and giving it more flexibility to operate in a competitive environment. And, as noted above, if TVA is able to offer competitively priced power by 2007, its distributors would be less likely to leave and TVA would be less likely to have stranded costs.

If TVA were to lose its legislative protections today, its high level of debt and corresponding high financing costs would be a competitive challenge. This competitive challenge would be even greater if it were at the same time attempting to recover costs of deferred assets through rates. Despite having reduced its debt and deferred assets over the past 3 years, TVA still compares unfavorably to its likely competitors in these areas. In addition, TVA is revising its goals for reducing debt and deferred assets downward significantly. Whether or not the deferred assets will contribute to stranded costs that are recoverable from customers depends on the specific requirements of any legislation that might remove TVA’s legislative protections and TVA’s ability to retain its current competitive advantages in a restructured environment. In addition, the longer that TVA has to prepare for competition, the longer it will have to reduce debt and recover the costs of its deferred assets and position itself more competitively. Ultimately, TVA’s ability to be competitive will depend on the future market price of power, which cannot be predicted with any certainty.
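A brief calculation, using only the deferred nuclear unit figures cited above, shows how the proposed revision compares with the original recovery target. The variable names are ours, and the simple subtraction is an illustration of the reported amounts rather than TVA’s actual amortization schedule.

```python
# Deferred nuclear unit recovery under the original vs. proposed 10-year plan.
# Dollar figures are taken from the report; the comparison is illustrative only.

starting_balance = 6.3e9       # unrecovered costs as of September 30, 2000
original_target_2007 = 0.5e9   # original plan: recover all but $500 million
revised_target_2007 = 3.8e9    # proposed revision to the plan

original_recovery = starting_balance - original_target_2007   # 5.8e9
revised_recovery = starting_balance - revised_target_2007     # 2.5e9
shortfall = revised_target_2007 - original_target_2007        # 3.3e9

print(f"Recovery under the original plan:  ${original_recovery/1e9:.1f} billion")
print(f"Recovery under the proposed plan:  ${revised_recovery/1e9:.1f} billion")
print(f"2007 balance above the original target: ${shortfall/1e9:.1f} billion")
```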
TVA, in a letter from its Chief Financial Officer, disagreed with our findings in three areas—the future market price of electricity, TVA’s financial condition compared to other utilities, and the relationship between TVA’s deferred assets and potential stranded costs. TVA’s comments are reproduced in appendix III and discussed below. In addition, TVA officials provided technical comments on the draft report, which we have incorporated as appropriate. TVA also took the opportunity to comment, in a section called “Looking Back,” on progress it has made since issuing its 10-year plan in 1997, including reducing debt and recovering the costs of certain deferred assets, and its goals and strategies for the future. We discuss these comments at the end of this section.

Market Prices for Electricity

TVA agreed that the future market price of electricity is a key factor in assessing the likelihood of success in a competitive environment and that the price cannot be predicted with any certainty, but disagreed on the general direction of prices. TVA and its consultants are projecting higher future market prices. As evidence of projected increases in market prices, TVA cites higher trading prices, higher “forward” prices offered by suppliers, higher long-term contract price offerings, and higher prices for fuel sources such as natural gas. Our report discusses TVA’s views in this regard; however, we underscore the uncertainty of projections of the future price of power by citing a knowledgeable source that projects lower prices. In the draft we provided to TVA for comment, we included point estimates from various sources for the future market price of power. Considering TVA’s comments, we agree that point estimates imply more certainty about future prices than we intended or is warranted. As a result, we revised our report by removing those estimates. However, to underscore the uncertainty of future market prices, we have included the Energy Information Administration’s (EIA) projection of a downward trend in the future market price of power in the region in which TVA operates. EIA’s analysis was based in part on a projected decline in coal prices that, according to EIA, would more than offset projected increases in gas prices. EIA is also projecting that nuclear fuel prices will remain steady. We believe these are relevant points to consider since, in the year 2000, TVA’s power generation fuel mix was about 63 percent coal, 31 percent nuclear, 6 percent hydropower (which has no fuel cost), and less than 1 percent natural gas. Our main point is that the future market price of power “cannot be predicted with any certainty.”

TVA cites prices for electricity and natural gas for December 2000 as an example of market direction and volatility to support its projection of future higher prices. We agree that the market has shown volatility at certain times. In fact, this volatility strengthens our view that future prices are uncertain. In addition, according to data from the National Oceanic and Atmospheric Administration, in the entire region in which TVA markets power, December 2000 was one of the 10 coolest periods on record over the last 106 years. We would not predict the future on the basis of such an anomaly.

TVA commented that it appreciated our recognition of its progress in improving its financial condition, but objected to our findings that TVA’s financial condition compares unfavorably to likely competitors.
In particular, TVA questioned our choice of financial ratios in comparing it to other utilities. TVA noted that most of our ratios ignore total cost and merely reflect the differences between the capital structure of TVA and that of IOUs. We disagree with TVA in this regard. Our choice of ratios was appropriate because they result in meaningful information regarding the relative financial conditions of the entities.

To assess the financial condition of the entities, we selected two types of ratios. The first type indicates an entity’s financial flexibility to successfully respond to competitive and financial challenges. In this regard, we compared TVA to other utilities in terms of (1) total financing costs, (2) fixed financing costs, and (3) the ability of an entity to pay for capital expenditures and common stock dividends from net cash generated from operations. Each of these ratios is an indicator of an entity’s ability to withstand stressful financial conditions. Interest costs are particularly important to consider because they are fixed costs that must be paid even in times of competitive or financial pressures. Contrary to TVA’s comment letter, we recognize the differences between TVA’s financial structure and those of IOUs and accounted for those differences in performing our analyses. As our report notes, TVA’s financing (except for internally generated cash, as with all entities we assessed) is obtained by issuing debt, while IOUs also have the option of equity financing. The requirement that TVA obtain financing only by issuing debt could be considered a competitive disadvantage because of the corresponding fixed financing costs, which affect TVA’s financial flexibility. The ratio of total financing cost to revenue compares TVA’s interest costs, as a percent of revenue, to the IOUs’ costs of (1) interest, (2) preferred stock dividends, and (3) common stock dividends as a percent of revenue. The ratio of fixed financing costs to revenue compares TVA’s interest costs, as a percent of revenue, to the fixed portion of the IOUs’ financing costs (i.e., their interest costs and preferred stock dividends) as a percent of revenue. These analyses appropriately adjust for the entities’ different financing structures, and assessing the extent to which an entity has fixed costs that limit its financial flexibility is a valid way to consider its financial condition.

The second type of financial ratio we used indicates the extent to which capital costs, including the costs of deferred assets, have been recovered. In this regard, we compared TVA to other utilities in terms of (1) the portion of capital assets that has not begun to be included in rates and (2) the portion of gross property, plant, and equipment that has already been recovered. These indicators are important because a high level of unrecovered capital costs could compound an entity’s challenges as it enters a competitive market. In the case of TVA, if it enters a competitive environment with the relatively high debt service costs it now carries, its ability to price its power competitively could be jeopardized, thus increasing its potential for stranded costs. Our report notes that TVA’s competitive challenges would be even greater if it were at the same time attempting to recover the costs of deferred assets through rates.

We disagree with TVA’s statement that a single statistic—the residential price of electricity in TVA’s region—best reflects TVA’s competitiveness.
While we agree that selling price is a function of cost, we note that TVA has a large amount of unrecovered costs. Since TVA remains in a regulated environment, with the ability to set its own rates and to recover or defer recovering the costs of some of its capital assets, this single statistic does not provide a complete picture of TVA’s costs or its ability to operate in a competitive environment. In addition, TVA’s current cost of delivering power does not provide a complete picture of the competitive environment TVA would likely be subject to if its legislative protections and the benefits of being a wholly owned government corporation were removed.

We also disagree with TVA’s statement that our ratios are distorted because they do not recognize the uniqueness of TVA’s business compared to others. According to TVA, a distortion results when TVA, which has predominantly wholesale sales, is compared to other entities that have predominantly retail sales. However, these other entities also sell at wholesale and would be competing with TVA at that level. Regardless, an entity’s fixed costs and portion of capital assets that have not been recovered are relevant and important considerations as one considers an entity’s prospects in a competitive market, be it wholesale or retail. We also note that, in its comment letter, TVA compared its total production costs to those of the 50 largest power producers in the United States, which for the most part are providers of retail power, but objected to our comparing TVA to some of the same utilities.

TVA states that “the report is misleading when it implies that the historical accounting value of any particular set of assets determines the potential for stranded costs” and that it is the net market value of all assets combined that is germane to the determination of stranded costs, and then only if the amortization of those assets drives total cost above market. While we do not disagree with TVA’s interpretation of stranded costs, we do disagree that historical accounting value plays no part in determining stranded costs. Historical accounting value, less accumulated depreciation and/or amortization, shows the amount of remaining capital costs to be recovered in the future. If TVA is attempting to recover more of these costs than other utilities in a competitive market and, as a result, its rates are above market, it could have stranded costs.

TVA also implies that we consider its deferred assets to be a proxy for stranded costs. On the contrary, our report clearly states that TVA could have stranded costs if it were unable to recover all its costs when selling power at or below market rates. In addition, we state that TVA’s potential for stranded costs relates to its high debt and deferred assets, which as of September 30, 2000, totaled about $26 billion and $8 billion, respectively. Recovery of these costs could drive the price of TVA’s power above market, leading to stranded costs. This is consistent with TVA’s definition of stranded costs. Our report reaches no conclusion on whether TVA will have stranded costs; it merely points out that if TVA is unable to price its power competitively because it is attempting to recover costs it incurred in the past, it could have potential stranded costs, depending on market conditions at the time. As noted above, due to the uncertainty of the future market price of power, we also do not conclude on whether TVA will be competitive in the future.
TVA notes that it has made progress in reducing debt, and corresponding interest expense, and recovering the costs of deferred assets since it released its 1997 plan. For example, by the end of fiscal year 2001, TVA expects to have reduced its debt by about $2.2 billion and its annual interest expense by about $300 million, and expects to have recovered about $2 billion in costs associated with its deferred assets. While we agree that TVA is moving in the right direction, TVA’s current proposed revisions to its 10-year plan project significantly less progress than envisioned in 1997, and these changes are not without consequence. As our report states, TVA’s current revisions to the plan estimate that debt outstanding at the end of fiscal year 2007 will be about $19.6 billion versus the $13.2 billion level anticipated when TVA issued its 1997 plan. TVA notes that since issuing the plan in 1997, it changed its strategy by investing cash in new generating capacity that otherwise would have been used for debt reduction. However, in our report we correctly point out that, while TVA has made this change, the cash it has invested in new capacity is far less than its debt reduction shortfall. TVA’s current projections show its debt reduction through 2007 being about $6.4 billion less than planned in 1997, and its investment in new generating capacity about $1.3 billion more. As a consequence of this debt reduction shortfall, we estimate that TVA’s interest expense in 2008 will be about $416 million greater than if it had reduced debt to $13.2 billion.

In the 1997 plan, one of TVA’s key stated objectives was to “alter its cost structure from its currently rigid, high fixed-to-variable cost relationship to a structure that is more flexible and better able to adjust to a volatile marketplace.” TVA’s 1997 plan further stated that interest expense is the cost component that, more than any other, challenges TVA’s ability to provide power at projected market rates in the future. This situation continues to be true today. However, as we state in our report, ultimately, TVA’s ability to be competitive will depend on the future market price of power, which cannot be predicted with any certainty. To the extent TVA is able to improve the financial ratios set out in our report, it will be better positioned to deal with this future uncertainty.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this letter until 7 days from its date. At that time, we will send copies of this report to appropriate House and Senate Committees; interested Members of Congress; Craven Crowell, Chairman, TVA’s Board of Directors; The Honorable Spencer Abraham, Secretary of Energy; The Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. The letter will also be available on GAO’s home page at http://www.gao.gov. We will also make copies available to others upon request. Please call me at (202) 512-9508 if you or your staffs have any questions. Major contributors to this report are listed in appendix IV.

We were asked to answer specific questions regarding TVA’s (1) debt and deferred assets, (2) financial condition, (3) potential stranded costs, and (4) bond rating and its impact on TVA’s interest costs. As agreed with your offices, this report addresses the first three questions. We plan to issue a separate report to address the fourth question. Specifically, for each of these three areas, you asked us to determine the following:
1. Debt and deferred assets: the progress TVA has made in achieving the goals of its 10-year business plan for reducing debt and deferred assets, and the extent to which TVA has used the additional revenues generated from its 1998 rate increase to reduce debt and deferred and regulatory assets.
2. Financial condition: how TVA’s financial condition, including debt and fixed cost ratios, compares to neighboring investor-owned utilities (IOUs).
3. Potential stranded costs: the link between TVA’s debt and its potential stranded costs; whether TVA has calculated potential stranded costs for any of its distributors, and if so, what methodology it used; and TVA’s options for recovering any potential stranded costs.

To identify the progress TVA has made in achieving the goals of its 10-year business plan for reducing debt and deferred assets, we reviewed GAO’s prior report on TVA’s 10-year Business Plan; interviewed TVA and Congressional Budget Office (CBO) officials; reviewed and analyzed various TVA reports and documents, including annual reports, audited financial statements, TVA’s 10-year business plan, and proposed updates to the plan; and analyzed supporting documentation (analytical spreadsheets, etc.) and assumptions underlying TVA’s 10-year plan and proposed updates to the plan.

To identify the extent to which TVA has used the additional revenues generated from its 1998 rate increase to reduce debt and deferred and regulatory assets, we obtained an estimate from TVA of the amount of additional revenue generated from its 1998 rate increase; analyzed sales and revenue data in the supporting schedules to the proposed revision to the 10-year plan to determine whether TVA’s estimate was reasonable; and compared the estimate of the amount of additional revenue generated from the 1998 rate increase to the reduction in debt and deferred assets over the first 3 years of the plan.

To determine how TVA’s financial condition, including debt and fixed cost ratios, compares to its likely competitors, we reviewed prior GAO reports on TVA that analyzed its financial condition; determined likely competitors by analyzing prior GAO reports and other reports by industry experts; obtained and analyzed financial data from the audited financial statements of TVA, seven IOUs, and one independent power producer; computed and compared key financial ratios for TVA and the other eight entities; reviewed the annual reports of the eight entities to determine what steps they have taken to financially prepare themselves for competition; interviewed TVA officials about their efforts to position themselves competitively, including their efforts to reduce debt, recover the costs of their capital assets, and recover stranded costs; and analyzed data on the future market price of power.

The ratios we used in our comparison were computed as follows: The ratio of financing costs to revenue was calculated by dividing financing costs by operating revenue for the fiscal year. The financing costs include interest expense on short-term and long-term debt, payments on appropriations (TVA only), and preferred and common stock dividends (IOUs only). Note that preferred and common stock dividends were included in the IOUs’ financing costs to reflect the difference in the capital structure of these entities and TVA. The ratio of fixed financing costs to revenue was calculated by dividing financing costs less common stock dividends by operating revenue for the fiscal year.
Common stock dividends were excluded from the IOUs’ financing costs because they are not contractual obligations that must be paid and, thus, are not fixed costs. The ratio of net cash from operations to expenditures for property, plant, and equipment (PP&E) and common stock dividends was calculated by dividing net cash from operations by expenditures for PP&E and common stock dividends for the fiscal year. Net cash from operations represents the cash received from customers minus the cash paid for operating expenses. Thus, net cash from operations represents the cash available for expenditures for PP&E, common stock dividends (IOUs only), and other investing and financing transactions. Again, we included common stock dividends in the IOUs’ ratios to reflect the difference in cash flow requirements for these entities and TVA. Preferred stock dividends were not included because they come out of operating revenues and thus are already reflected in the net cash figure. Because these data were not available for all entities, we excluded the effect of capital assets acquired through acquisition. The ratio of accumulated depreciation and amortization to gross PP&E was calculated by dividing accumulated depreciation and amortization by gross PP&E at fiscal year-end. The ratio of deferred assets to gross PP&E was calculated by dividing deferred assets by gross PP&E at fiscal year-end. Deferred assets include construction in progress and, for TVA only, its deferred nuclear units. Deferred nuclear units are included for TVA because TVA treats them as construction in progress (i.e., not depreciated).

For comparison purposes, we selected the major IOUs that border on TVA’s service area because industry experts told us that, due to the high cost of transmitting electricity, TVA’s competition would most likely come from IOUs located close to its service area. However, to represent the changing structure of the electricity industry, we included one large independent power producer. We did not include any publicly owned utilities in our analysis because the publicly owned utilities that provide electricity in the states served by our IOU comparison group generally only distribute but do not generate electricity. The IOUs used in our comparisons include (1) American Electric Power, (2) Carolina Power & Light, (3) Dominion Resources, (4) Duke Power, (5) Entergy, (6) LG&E Energy Corporation, and (7) Southern Company. The independent power producer was AES Corporation.

To obtain information on various issues facing utilities in a restructuring industry, we reviewed documents from the Energy Information Administration (EIA) and the annual reports of TVA and the IOUs. We also spoke with the organization that represents TVA’s distributors to understand their concerns about TVA’s future competitiveness. In addition, we contacted financial analysts to identify the criteria they use to evaluate the financial condition of electric utilities.
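Because the ratio definitions above translate directly into arithmetic, a minimal sketch may help show how they are applied. All of the input figures below are hypothetical, and the field names are ours rather than line items from any entity’s audited financial statements.

```python
# Hypothetical computation of the comparison ratios defined above.
# All input figures are invented for illustration.

financials = {
    "operating_revenue": 7.0e9,
    "interest_expense": 1.9e9,            # short- and long-term debt
    "payments_on_appropriations": 0.0,    # TVA only
    "preferred_dividends": 0.1e9,         # IOUs only
    "common_dividends": 0.6e9,            # IOUs only
    "net_cash_from_operations": 2.2e9,
    "ppe_expenditures": 1.5e9,
    "accumulated_depreciation_amortization": 9.0e9,
    "gross_ppe": 25.0e9,
    "deferred_assets": 3.0e9,             # construction in progress, etc.
}

f = financials
financing_costs = (f["interest_expense"] + f["payments_on_appropriations"]
                   + f["preferred_dividends"] + f["common_dividends"])
ratios = {
    "financing costs / revenue": financing_costs / f["operating_revenue"],
    "fixed financing costs / revenue":
        (financing_costs - f["common_dividends"]) / f["operating_revenue"],
    "net cash from operations / (PP&E + common dividends)":
        f["net_cash_from_operations"] / (f["ppe_expenditures"] + f["common_dividends"]),
    "accumulated depreciation and amortization / gross PP&E":
        f["accumulated_depreciation_amortization"] / f["gross_ppe"],
    "deferred assets / gross PP&E": f["deferred_assets"] / f["gross_ppe"],
}
for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```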
To identify the link between TVA’s debt and its potential stranded costs, we interviewed industry experts at the Federal Energy Regulatory Commission, Edison Electric Institute (EEI), and CBO on the options other utilities have pursued to recover stranded costs; reviewed EIA documents pertaining to how stranded costs have been dealt with in the states that have restructured; questioned TVA officials on TVA’s plans for mitigating, calculating, and recovering potential stranded costs; and analyzed TVA’s contracts to determine whether TVA has contractually relieved its customers of any obligation to pay for stranded costs.

To determine whether TVA has calculated stranded costs that could potentially be assessed against its distributors, and if so, the methodology used, we questioned TVA officials on whether TVA had calculated potential stranded costs for any of its distributors and obtained information on the methodology TVA used to calculate potential stranded costs for the two distributors that informed TVA of their intent to leave its system.

To identify the options for recovering any potential stranded costs at TVA, we obtained and analyzed information from EIA, EEI, and CBO regarding the mechanisms for stranded cost recovery that have been used in states that have restructured their electricity industries and interviewed FERC officials and reviewed FERC documents pertaining to stranded cost recovery.

We conducted our review from April 2000 through January 2001 in accordance with generally accepted government auditing standards. To the extent practical, we used audited financial statement data in performing our analyses, or reconciled the data we used to audited financial statements; however, we were not able to do so in all cases, and we did not verify the accuracy of all the data we obtained and used in our analyses. Information on TVA’s debt reduction, deferred asset recovery, and projection of the future market price of power was based on TVA’s anticipated changes to the 10-year plan at the time of our review.

During the course of our work, we contacted the following organizations: Congressional Budget Office; Department of Energy’s Energy Information Administration; Federal Energy Regulatory Commission; Office of Management and Budget; Tennessee Valley Authority; Moody’s Investors Service, New York, New York; Standard & Poor’s, New York, New York; Tennessee Valley Public Power Association, Chattanooga, Tennessee; Federal Accounting Standards Advisory Board, Washington, D.C.; Edison Electric Institute, Washington, D.C.; and Standard & Poor’s DRI, Lexington, Massachusetts.

In the states that have restructured their electricity industries, there have been three commonly used mechanisms to mitigate stranded costs. These mitigation measures can be employed either before or during restructuring. Depending on the approval of state regulators, utilities have the following options:

Securitization − Under securitization, state restructuring legislation authorizes a utility to receive the right to a stream of income from ratepayers, such as a competitive transition charge. The utility turns over that right to a state bank for cash, thus effectively refinancing present debt and trading a regulated income stream for a lump sum of money. The state bank issues debt (i.e., sells bonds) secured by future customer payments or the competitive transition charge on customers' bills.
The benefits from securitization stem from lower financing costs − the state bonds generally are free from state tax and have a higher rating than the utility, thus reducing interest expense. Therefore, the customer surcharge required to pay off the bonds is less than the charge that would be necessary to produce the same amount of money had the utility issued the bonds itself.

Mitigation before restructuring − With this option, regulators allow a utility to take steps to reduce potential stranded costs before full restructuring is implemented, including allowing accelerated depreciation. To the extent a utility is permitted to mitigate potential stranded costs, customers benefit.

Mandatory asset divestiture − Requiring a utility to divest itself of generating assets produces revenue that can be used to recover potential stranded costs, potentially benefiting all customers. When a utility sells its assets, it can use the cash to reduce debt. At the same time, it no longer has to recover the cost of those assets, making its power potentially more competitive. However, it also must now purchase power and is thereby subject to market risk. In addition, proceeds from the sale are assumed to cover the book value of the asset; if not, potential stranded costs remain. Also, asset divestiture affects stockholders; to the extent a utility receives cash in excess of book value, stockholders benefit.

In addition to the individual named above, Richard Cambosos, Jeff Jacobson, Joseph D. Kile, Mary Merrill, Donald R. Neff, Patricia B. Petersen, and Maria Zacharias made key contributions to this report.
If the Tennessee Valley Authority (TVA) were to lose its legislative protections today, its high level of debt and corresponding high financing costs would be a competitive challenge. This competitive challenge would be even greater if it were at the same time attempting to recover costs of deferred assets through rates. Despite having reduced its debt and deferred assets over the past three years, TVA still compares unfavorably to its likely competitors in these areas. In addition, TVA is revising its goals for reducing debt and deferred assets downward significantly. Whether or not the deferred assets will contribute to stranded costs that are recoverable from customers depends on the specific requirements of any legislation that might remove TVA's legislative protections and TVA's ability to retain its current competitive advantages in a restructured environment. In addition, the longer that TVA has to prepare for competition, the longer it will have to reduce debt and recover the costs of its deferred assets and position itself more competitively. Ultimately, TVA's ability to be competitive will depend on the future market price of power, which cannot be predicted with any certainty.
The federal government buys a myriad of goods and services from contractors. Federal agency acquisitions must be conducted in accordance with a set of statutes and regulations designed to accomplish several objectives, including full and open competition and various social and economic goals, such as encouraging small business participation. In the late 1980s and early 1990s, some became convinced that the federal procurement system had become complex, unwieldy, and fraught with tension between the basic goals of efficiency and fairness because of a proliferation of requirements governing almost every aspect of the acquisition process. In this environment, there were concerns about the government's ability to take full advantage of the opportunities offered by the commercial marketplace. In response to these concerns, Congress enacted two major pieces of reform legislation, FASA and Clinger-Cohen, aimed at creating a more efficient and responsive federal acquisition system. Concerns remain about whether the changes brought about by acquisition reform during the 1990s have come at the expense of placing small business at a disadvantage.

The federal procurement process underwent many legislative and administrative changes during the 1990s, some of which have the potential to affect the ability of small businesses to obtain federal contracts. Other changes occurred during this time, such as reductions in the amount the government spent on goods and services and the size of its acquisition workforce, which agency officials believe have also encouraged procurement streamlining. These changes included the use of certain contract vehicles, such as multiagency contracts (MAC). In addition, reforms have modified the dollar range of contracts that are reserved for small businesses and encouraged the use of purchase cards, which are similar to corporate credit cards, for certain purchases. Some organizations that represent small businesses are concerned that these changes could potentially erode the ability of small businesses to receive federal contracts.

At the same time that acquisition reform legislation was enacted, other factors changed how much the federal government bought as well as the way it buys goods and services. During the 1990s, the federal government decreased the amount spent on goods and services and downsized the acquisition workforce. The total amount of goods and services that the government purchased, including those bought with purchase cards, declined by about 7 percent, from an inflation-adjusted $224 billion in fiscal year 1993 to $209 billion in fiscal year 1999. Consequently, all businesses had to compete for a reduced total of federal contract expenditures. Figure 2 shows the trend in total federal procurement expenditures during this period. Federal agencies also reduced their acquisition workforce from 165,739 personnel in fiscal year 1990 to 128,649 in fiscal year 1998, or approximately 22 percent, with many of these reductions taking place at the Department of Defense (DOD). According to agency officials, contracting officials have sought ways to streamline procurement practices within the applicable statutes and regulations partly as a result of these workforce reductions; this includes the use of previously authorized contracting vehicles such as blanket purchase agreements (BPA), indefinite-delivery indefinite-quantity (IDIQ) contracts, and GSA federal supply schedule contracts. Appendix I provides a description of these contract vehicles.
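The roughly 7 percent and 22 percent declines cited above follow directly from the reported endpoint values; the brief check below simply recomputes those percentages from the figures in the text.

```python
# Recomputing the reported declines from the endpoint values in the text.

procurement_1993 = 224e9   # inflation-adjusted, fiscal year 1993
procurement_1999 = 209e9   # fiscal year 1999
workforce_1990 = 165_739
workforce_1998 = 128_649

spend_decline = (procurement_1993 - procurement_1999) / procurement_1993
workforce_decline = (workforce_1990 - workforce_1998) / workforce_1990

print(f"Procurement spending decline: {spend_decline:.1%}")      # about 6.7%, reported as about 7 percent
print(f"Acquisition workforce decline: {workforce_decline:.1%}")  # about 22.4%, reported as approximately 22 percent
```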
Contract bundling is an acquisition practice that received considerable attention in the 1990s and is often associated with, but in fact is not actually contained in, acquisition reform legislation enacted during this period. Federal agencies combine existing contracts into fewer contracts as a means of streamlining as well as reducing procurement and contract administration costs, a practice generally referred to as “contract consolidation.” A subset of consolidated contracts is “bundled contracts,” which the Small Business Reauthorization Act of 1997 defines as the consolidation of two or more procurement requirements for goods or services previously provided or performed under separate, smaller contracts into a solicitation of offers for a single contract that is likely to be unsuitable for award to a small business concern due to the diversity, size, or specialized nature of the elements of the performance specified; the aggregate dollar value of the anticipated award; the geographic dispersion of contract performance sites; or any combination of these three criteria. This act requires each federal agency, to the maximum extent practicable, to (1) promote participation of small businesses by structuring its contracting requirements to facilitate competition by and among small businesses and (2) avoid the unnecessary and unjustified bundling of contracts that are likely to be unsuitable for small business participation as prime contractors.

Federal policy has also encouraged the use of governmentwide commercial purchase cards for micropurchases. The purchase card, issued to a broad range of authorized agency personnel to acquire and pay for goods and services, is similar in nature to a corporate credit card and is the preferred method for purchases of $2,500 or less. Some organizations that represent small businesses believe that the purchase card makes it easier for government personnel to make purchases from sources other than small businesses because that may be more convenient for the purchaser.

Small businesses, as a group, have received at least the legislatively mandated share of federal contract expenditures each fiscal year from 1993 to 1999. Between fiscal years 1993 and 1997, when the legislative goal was at least 20 percent, small businesses received between 24 and 25 percent of total federal contract expenditures. In both fiscal years 1998 and 1999, when the goal increased to 23 percent, small businesses received 23 percent of total federal contract expenditures. Focusing on expenditures for new contracts worth over $25,000, our analysis shows that small businesses have received between 25 and 28 percent of these expenditures during this period. In addition, focusing on the various categories of goods and services that the federal government purchases, small businesses received a higher share in fiscal year 1999 of expenditures in new contracts for most categories of goods and services than they did in fiscal year 1993. Several contract vehicles accounted for about one quarter of all governmentwide expenditures for contracts over $25,000 in fiscal year 1999, and small businesses received between 26 and 55 percent of expenditures for these contract vehicles in that year. We could not determine the amount or impact of contract bundling or the impact of the increased use of government purchase cards on small businesses.
Although FASA requires that contracts over $2,500 up to $100,000 generally be reserved exclusively for small businesses, we could not determine the amount of expenditures for these contracts because, in some cases, information is reported to FPDC on contracts together with modifications. SBA and FPDC data indicate that federal agencies, as a whole, have met their annual governmentwide small business procurement goal from fiscal years 1993 to 1999. This legislative goal increased from at least 20 percent of total federal contract expenditures to 23 percent effective fiscal year 1998. Between fiscal years 1993 and 1997, when the legislative goal was at least 20 percent, small businesses received between 24 and 25 percent of total federal contract expenditures. In fiscal years 1998 and 1999, when the legislative goal increased to 23 percent, small businesses received 23 percent of total federal contract expenditures. Figure 3 shows the share of total federal contract expenditures going to small businesses for this period. Under the Small Business Act, SBA has authority to prescribe a method to measure the participation of small businesses in federal procurement. In calculating the actual achievement of small business procurement goals for individual federal agencies, SBA excludes certain categories of procurements from the base, or denominator. SBA has identified several categories of procurements that are excluded from the base because SBA officials believe that small businesses do not have a reasonable opportunity to compete for them, including (1) foreign military sales; (2) procurement awarded and performed outside the United States; (3) purchases from mandatory sources of supplies as listed in the Federal Acquisition Regulation; and (4) purchases for specific programs from the Departments of State, Transportation, and the Treasury. SBA's Office of Advocacy disagrees with SBA's approach of excluding categories of procurements in establishing the base. Adding back the categories of procurement that SBA excluded, the Office of Advocacy reported that small businesses received about 21 percent of total federal procurement in fiscal year 1998 (rather than the 23 percent that SBA reported) and that, therefore, the governmentwide goal for small business procurement was not met in fiscal year 1998. Some organizations that represent small businesses have expressed concerns that small businesses are at a disadvantage when competing for new federal contracts. Therefore, we analyzed the share of expenditures for new contracts going to small businesses. These data do not include modifications to existing contracts, which account for approximately half of all governmentwide procurement expenditures during this time. Our analysis of FPDS data of new contract expenditures shows that small businesses have received between 25 and 28 percent of such expenditures for contracts worth more than $25,000 between fiscal years 1993 and 1999. Figure 4 shows the results of our analysis. In calculating the share of total expenditures on new contracts going to small businesses from fiscal years 1993 to 1999, we used FPDC data on expenditures for new contracts worth more than $25,000 and did not exclude the types of expenditures that SBA excludes to calculate the small business procurement goal. As noted in figure 2, the federal government has been spending less money on goods and services since fiscal year 1993. 
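The effect of excluding categories of procurement from the base can be illustrated with a simple calculation. The dollar amounts in the sketch below are entirely hypothetical and were chosen only so that the two approaches land near the 23 and 21 percent figures discussed above; they do not reproduce SBA’s or the Office of Advocacy’s actual data.

```python
# Illustration of how excluding categories from the base (denominator)
# changes the computed small business share. All dollar amounts are
# hypothetical and chosen only to land near the percentages in the text.

small_business_awards = 40.0e9
total_procurement = 190.0e9
excluded_categories = 16.0e9   # e.g., foreign military sales, awards performed overseas

share_sba_method = small_business_awards / (total_procurement - excluded_categories)
share_full_base = small_business_awards / total_procurement

print(f"Share with exclusions (SBA method): {share_sba_method:.1%}")   # about 23%
print(f"Share against full base (Advocacy): {share_full_base:.1%}")    # about 21%
```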
The only categories of goods and services that experienced increases in governmentwide purchases on new contracts worth more than $25,000 between fiscal years 1993 and 1999 were real property and other services. Despite this overall decline in contract purchases, small businesses received a higher share in fiscal year 1999 than in fiscal year 1993 of expenditures on new contracts worth more than $25,000 for 5 of the 8 categories of goods and services of government procurement: equipment, research and development, architect and engineering, automatic data processing services, and other services. Figure 5 shows governmentwide trends for purchases under new contracts of goods and services worth more than $25,000 and the share of these purchases going to small businesses.

We analyzed FPDS data on the governmentwide use of certain contract vehicles for contracts over $25,000, including those that became popular during the 1990s. We found that these vehicles represent a small but growing share of federal procurement expenditures. Because FPDS only captures data for some of these contract vehicles, we had to limit our analysis to MACs, IDIQs, BPAs, and GSA schedules. Expenditures for the four types of contract vehicles we analyzed represented 25 percent of federal procurement expenditures on contracts over $25,000 in fiscal year 1999, compared with 16 percent in fiscal year 1994. Small businesses received 32 percent of expenditures for these contract vehicles in fiscal year 1999 compared with 24 percent in fiscal year 1994. For each of the four types of contract vehicles in our analysis, the share of expenditures going to small businesses was between 26 and 55 percent in fiscal year 1999, depending on the type of contract vehicle. For example, expenditures going to small businesses for MACs increased from $524 million in fiscal year 1994, or 8 percent of all expenditures for MACs, to $2 billion in fiscal year 1999, or 26 percent of all expenditures for MACs. Expenditures going to small businesses for IDIQs from fiscal years 1994 to 1999 remained relatively stable, near $7 billion. The percentage of total IDIQ expenditures going to small businesses increased from 24 percent in fiscal year 1994 to 28 percent in fiscal year 1999. The small business share of GSA schedules increased from 27 percent in fiscal year 1994 to 36 percent in fiscal year 1999, from $523 million to $3 billion. Finally, the small business share of BPAs fell from 97 percent in fiscal year 1994 to about 55 percent in fiscal year 1999, although expenditures going to small businesses increased from about $141 million in fiscal year 1994 to about $2 billion in fiscal year 1999.

In conducting a review of contract bundling in 2000, we found that there are only limited governmentwide data on the extent of contract bundling and its actual effect on small businesses. Federal agencies do not currently report information on contract bundling to FPDC; therefore, FPDC does not have data on this topic. Our review of consolidated contracts worth $12.4 billion at 3 procurement centers showed that the number of contractors and the contract dollars were generally reduced due to consolidation as agencies sought to streamline procurement and reduce its associated administrative costs. SBA determined that the consolidation of the contracts we reviewed did not necessarily constitute bundling.
In fact, 2 of the largest consolidated contracts involved only large businesses, and the remaining 11 consolidated contracts were awarded to small businesses. We analyzed the total amount of governmentwide purchase-card expenditures for fiscal years 1993 to 1999 and found that in fiscal year 1999 such expenditures totaled $10 billion, or about 5 percent of all federal procurement purchases. As figure 6 shows, these purchases have steadily increased since 1993, when the total amount bought with purchase cards was $527 million. These data include expenditures for all purchase-card transactions, both under and over $2,500. FASA permits purchases of goods or services up to $2,500 from any qualified supplier. Since FPDS does not collect detailed data on purchase-card expenditures, we could not determine what share of such governmentwide expenditures are going to small businesses.

We requested comments on a draft of this report from the Administrator of SBA, the Director of OMB, and the Administrator of GSA. SBA's Chief Operating Officer provided written comments in concurrence with our report. She pointed out that preliminary data for fiscal year 2000 show that federal agencies are finding it more difficult to meet the legislative goal of ensuring that 23 percent of the value of federal prime contracts go to small businesses. We did not include data for fiscal year 2000 in our review because these data are preliminary. Another area of concern was that since detailed data on purchase-card expenditures are not included in the FPDS database, trend analyses of these expenditures were not included in our report. As we note in our report, purchase-card expenditures have increased, but data are not available to determine the share of these purchases going to small businesses. In addition, SBA's Chief Operating Officer made several technical comments that we have reflected in this report, as appropriate. Officials from GSA's Offices of Enterprise Development and Governmentwide Policy provided technical comments that we have addressed in this report, as appropriate. OMB had no comments on our draft report. The comments we received from SBA are in appendix III.

To identify procurement changes that could affect small business contractors, we reviewed FASA, the Clinger-Cohen Act, the Small Business Reauthorization Act of 1997, and the Federal Acquisition Regulation. We also identified other changes that occurred during the 1990s that might have an effect on small businesses by interviewing agency officials and representatives of industry associations, and by reviewing agency documents. We met with officials from GSA, SBA, OMB's Office of Federal Procurement Policy (OFPP), and the Procurement Executives Council. We also met with representatives of the U.S. Chamber of Commerce, Small Business Legislative Council, and Independent Office Products and Furniture Dealers Association.

To determine the trends in federal procurement from small businesses, we analyzed data from the Federal Procurement Data Center's (FPDC) Federal Procurement Report for fiscal years 1993 through 1999 and other data we requested from FPDC and SBA for those same years. FPDC administers the Federal Procurement Data System (FPDS) within GSA. Since FPDC relies on federal agencies to report their procurement information, these data are only as reliable, accurate, and complete as the agencies report. In 1998, FPDC conducted an accuracy audit and reported that the average rate of accurate reporting in the FPDS database was 96 percent.
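As a rough cross-check, the purchase-card figures reported earlier in this section can be related to the fiscal year 1999 procurement total cited in the background discussion. The calculation below uses only those reported dollar amounts; the near 19-fold growth figure is approximate because the 1993 amount may not be stated on the same inflation-adjusted basis.

```python
# Relating the purchase-card totals to overall procurement, using the
# dollar figures reported in the text.

purchase_cards_1993 = 527e6
purchase_cards_1999 = 10e9
total_procurement_1999 = 209e9   # fiscal year 1999 total from the spending trend discussed earlier

share_1999 = purchase_cards_1999 / total_procurement_1999
growth = purchase_cards_1999 / purchase_cards_1993

print(f"Purchase-card share of fiscal year 1999 procurement: {share_1999:.1%}")  # about 4.8%, reported as about 5 percent
print(f"Growth in purchase-card spending, 1993 to 1999: {growth:.0f}x")          # roughly 19-fold
```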
Our analyses focused on total contract expenditures for federal procurement and the percentage of expenditures going to small businesses for new contracts and for certain contract vehicles. Unless otherwise noted, all expenditures were adjusted for inflation and represent constant fiscal year 1999 dollars. We conducted our review between March and October 2000 in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is presented in appendix II.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days. At that point, copies of this report will be sent to appropriate congressional committees and other interested Members of Congress; the Administrator of the Small Business Administration; the Administrator of the General Services Administration; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Staff acknowledgements are listed in appendix IV. If you or your staff have any questions about this report, please contact me at (202) 512-8984 or Hilary Sullivan at (214) 777-5652.

Indefinite-Delivery, Indefinite-Quantity Contract: This type of contract provides for an indefinite quantity, within stated limits, of goods or services during a fixed period of time. Agencies place separate task or delivery orders for individual requirements that specify the quantity and delivery terms associated with each order. The Federal Acquisition Regulation (FAR) expresses a preference for multiple awards of these contracts, which allows orders to be placed using a streamlined, commercial style selection process where consideration is limited to the contract awardees. The competition between the multiple awardees is designed to encourage better prices and responses than if the agency were negotiating with a single contractor. Contractors are to be afforded a fair opportunity to be considered for award of task and delivery orders but cannot generally protest the award of such orders. Indefinite-delivery, indefinite-quantity contracts include GWACs and GSA federal supply schedule contracts.

Federal Supply Schedules: Under the schedule program, GSA enters into indefinite-delivery, indefinite-quantity contracts with commercial firms to provide commercial goods and services governmentwide at stated prices for given periods of time. Authorized buyers at agencies place separate orders for individual requirements that specify the quantity and delivery terms associated with each order, and the contractor delivers products or services directly to the agency. The program is designed to provide federal agencies with a simplified process for obtaining millions of commonly used commercial supplies and services at prices associated with volume buying. The program consists of single award schedules with one supplier and multiple award schedules, in which GSA awards contracts to multiple companies supplying comparable services and products, often at varying prices. When agency requirements are to be satisfied through the use of multiple award schedules, the small business provisions (such as the exclusive reservation for small businesses for contracts over $2,500 up to $100,000) of the FAR do not apply.
Blanket Purchase Agreement: A simplified method of filling anticipated repetitive needs for supplies or services by establishing “charge accounts” with qualified sources of supply, which may include federal supply schedule contractors. Under such an agreement, the contractor and the agency agree to contract clauses applying to future orders between the parties during its term. Future orders would incorporate, by reference or attachment, clauses covering purchase limitations, authorized individuals, itemized lists of supplies or services furnished, date of delivery or shipments, billing procedures, and discounts. Under the FAR, the existence of a blanket purchase agreement does not justify purchasing from only one source or avoiding small business preferences.

Our objectives were to identify (1) provisions in acquisition reform legislation enacted in the 1990s and other changes in procurement taking place during this time that could affect small business contractors and (2) trends that might indicate possible shifts in the ability of small businesses to obtain federal contracts in the 1990s. To achieve our first objective, we analyzed several pieces of legislation enacted in the 1990s, federal acquisition regulations, and governmentwide procurement data, and we interviewed federal officials at several agencies. We examined the Federal Acquisition Streamlining Act of 1994 (FASA), the Clinger-Cohen Act of 1996, the Small Business Reauthorization Act of 1997, and the Federal Acquisition Regulation. We analyzed governmentwide procurement data reported by GSA's Federal Procurement Data Center (FPDC) and data on the governmentwide acquisition workforce reported by GSA's Federal Acquisition Institute in its Report on the Federal Acquisition Workforce for fiscal years 1991 and 1998. We interviewed officials at GSA, OFPP, SBA, and the Procurement Executives Council. We also interviewed representatives of the U.S. Chamber of Commerce, Small Business Legislative Council, and Independent Office Products and Furniture Dealers Association.

To achieve our second objective, we gathered governmentwide data on federal procurement from FPDC and SBA for fiscal years 1993 through 1999. We could not determine the direct impact of legislative changes and other trends on small businesses because of the numerous concurrent factors and the insufficiency of governmentwide data to directly measure the effect of these changes on small business contractors. Federal agencies report procurement data to FPDC in two categories: (1) contract awards of $25,000 or less each and (2) contract awards greater than $25,000. Each agency reports summary data on contracts worth $25,000 or less to FPDC and includes information such as type of contractor and procurement methods. Agencies report greater detail on each individual contract over $25,000, including type of contract action, type of contractor, and product or service purchased. We analyzed aggregate data reported in FPDC's Federal Procurement Report for each of the years. We requested additional data from FPDC for contracts over $25,000 to include information on expenditures going to small businesses for new contracts; total expenditures going to small businesses, including for new contracts and contract modifications, for specific contract vehicles; and expenditures going to small businesses for new contracts for all products and services.
The data on new contracts that FPDC provided include expenditures on original contract actions, as opposed to expenditures on modifications to existing contracts. FPDC categorizes all federal contract expenditures into eight broad categories of products and services. According to FPDC officials, FPDS is updated constantly as federal agencies report updated procurement information. The data we received from FPDC are as of July 2000. In addition, we analyzed the summary information on government purchase-card transactions from the Federal Procurement Report for each year. We also collected data from SBA and FPDC on the achievement of the governmentwide federal procurement goal for small businesses. The SBA data on the achievement of this goal for fiscal years 1993 through 1997 are from The State of Small Business. Because the most recent version of The State of Small Business was published in fiscal year 1997, we used FPDC data published in its annual Federal Procurement Report on the achievement of the legislative goal for fiscal years 1998 and 1999. As indicated earlier, SBA began using FPDS data to calculate the achievement of the small business legislative goal as of fiscal year 1998. Although FASA requires that contracts over $2,500 up to $100,000 be exclusively reserved for small businesses, we could not determine the amount of expenditures or share going to small businesses for these contracts because, in some cases, information on new contracts is reported to FPDC commingled with modifications. Unless otherwise noted, we adjusted all dollar amounts using a gross domestic product price index from the Bureau of Economic Analysis, using fiscal year 1999 as the base year. We did not independently verify FPDC or SBA data. FPDC relies on agencies to report their procurement information; therefore, the data are only as reliable, accurate, and complete as what the agencies report. In 1998, however, FPDC conducted an accuracy audit of some of its data elements and reported that the average rate of accurate reporting in the FPDS database was 96 percent. We performed our work at SBA headquarters, OFPP, and GSA headquarters. We conducted our review between March and October 2000 in accordance with generally accepted government auditing standards.

Jason Bair, William Chatlos, James Higgins, Maria Santos, Adam Vodraska, and Wendy Wilson made key contributions to this report.
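To illustrate the constant-dollar adjustment described above, the following sketch converts a nominal expenditure to constant fiscal year 1999 dollars using a price index with 1999 as the base year. It is a minimal sketch; the index values shown are hypothetical placeholders, not actual Bureau of Economic Analysis figures.

# Sketch: converting nominal expenditures to constant fiscal year 1999 dollars
# using a GDP price index. The index values below are illustrative placeholders,
# not actual Bureau of Economic Analysis data.

GDP_PRICE_INDEX = {
    1993: 90.5,   # hypothetical index value
    1996: 95.2,   # hypothetical index value
    1999: 100.0,  # base year
}

def to_constant_fy1999_dollars(nominal_dollars: float, fiscal_year: int) -> float:
    """Deflate a nominal dollar amount to constant FY 1999 dollars."""
    index = GDP_PRICE_INDEX[fiscal_year]
    base = GDP_PRICE_INDEX[1999]
    return nominal_dollars * (base / index)

if __name__ == "__main__":
    # Example: $10 million reported in fiscal year 1993
    nominal = 10_000_000
    constant = to_constant_fy1999_dollars(nominal, 1993)
    print(f"${nominal:,.0f} in FY 1993 is about ${constant:,.0f} in constant FY 1999 dollars")

Expressing every year's expenditures in the same base-year dollars keeps year-to-year comparisons of small business shares from being distorted by inflation.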
This report focuses on trends in federal procurement for small businesses during the 1990s. Some organizations that represent small businesses and others have expressed concerns that acquisition reforms in the mid-1990s may have reduced the opportunities for small businesses to compete for federal government contracts. These reforms sought to streamline acquisition processes to help the government acquire goods and services more efficiently. The reforms included provisions to facilitate greater use of some types of contracts. However, small business representatives believe that some of these reforms could make it difficult for small businesses to compete for federal contracts. For example, the Clinger-Cohen Act authorizes the use of multiagency contracts. These contracts could consolidate agencies' requirements, which small businesses may not be able to meet. At the same time, some procurement reforms have benefited small businesses. The Federal Acquisition Streamlining Act, for example, increased the value of contracts set aside exclusively for small business participation. In addition, the Small Business Reauthorization Act of 1997 raised the governmentwide goal for the share of federal contract dollars awarded to small businesses to 23 percent. Small Business Administration data indicate that federal agencies met the legislative goal for procurement from small businesses from fiscal years 1993 to 1999.
In June 1997, we reported on the results of our interviews with state WIC officials in 8 states that had unspent federal funds in fiscal year 1995 and 2 states that did not have unspent funds that year. These state officials identified a variety of reasons for having unspent federal WIC funds that were returned to the U.S. Department of Agriculture’s (USDA) Food and Nutrition Service (FNS) for reallocation. In fiscal year 1996, the states returned about $121.6 million, or about 3.3 percent, of that year’s $3.7 billion WIC grant for reallocation to the states in the next fiscal year. Some of the reasons cited by the WIC directors for not spending all available funds related to the structure of the WIC program. For example, the federal grant is the only source of funds for the program in most states. Some of these states prohibit agency expenditures that exceed their available funding. As a result, WIC directors reported that they must be cautious not to overspend their WIC grant. Because WIC grants made to some states are so large, even a low underspending rate can result in millions of returned grant dollars. For example, in fiscal year 1995, California returned almost $16 million in unspent WIC funds, which represented about 3 percent of its $528 million federal grant. Unlike California, New York State had no unspent grant funds in fiscal year 1995. New York was one of 12 states that supplemented its federal WIC grant with state funds that year and hence did not have to be as cautious in protecting against overspending its federal grant. Overall, the group of states that supplemented their WIC grants in fiscal year 1995 returned a smaller percentage of their combined WIC funds than did the states that did not supplement their federal grants. States also had unspent federal funds because the use of vouchers to distribute benefits made it difficult for states to determine program costs until the vouchers were redeemed and processed. Two features of the voucher distribution method can contribute to the states’ difficulty in determining program costs. First, some portion of the benefits issued as vouchers may not be used, thereby reducing projected food costs. Participants may not purchase all of the food items specified on the voucher or not redeem the voucher at all. Second, because of the time it takes to process vouchers, states may find after the end of the fiscal year that their actual food costs were lower than projected. For example, most states do not know the cost of the vouchers issued for August and September benefits until after the fiscal year ends because program regulations require states to give participants 30 days to use a voucher and retailers 60 days after receiving the voucher to submit it for payment. The difficulty in projecting food costs in a timely manner can be exacerbated in some states that issue participants 3 months of vouchers at a time to reduce crowded clinic conditions. In such states, vouchers for August benefits could be provided as early as June but not submitted for payment until the end of October. Other reasons for states having unspent WIC funds related to specific circumstances that affect program operations within individual states. For example, in Texas the installation of a new computer system used to certify WIC eligibility and issue WIC food vouchers contributed to the state’s having unspent funds of about $6.8 million in fiscal year 1996. 
According to the state WIC director, the computer installation temporarily reduced the amount of time that clinic staff had to certify and serve new clients because they had to spend time instead learning new software and operating procedures. As a result, they were unable to certify and serve a number of eligible individuals and did not spend the associated grant funds. In Florida, a hiring freeze contributed to the state’s having unspent funds of about $7.7 million in fiscal year 1995. According to the state WIC director, although federal WIC funds were available to increase the number of WIC staff at the state and local agency level, state programs were under a hiring freeze that affected all programs, including WIC. The hiring freeze hindered the state’s ability to hire the staff needed to serve the program’s expanding caseload. Having unspent federal WIC funds did not necessarily indicate a lack of need for program benefits. WIC directors in some states with fiscal year 1995 unspent funds reported that more eligible individuals could have been served by WIC had it not been for the reasons related to the program’s structure and/or state-specific situations or circumstances. On the basis of our nationwide survey of randomly selected local WIC agencies, we reported in October 1997 that these agencies have implemented a variety of strategies to increase the accessibility of their clinics for working women. The most frequently cited strategies—used by every agency—are scheduling appointments instead of taking participants on a first-come, first-served basis and allowing other persons to pick up participants’ WIC vouchers. Scheduling appointments reduces participants’ waiting time at the clinic and makes more efficient use of the agency staff’s time. Allowing other persons, such as baby-sitters and family members, to pick up the food vouchers for participants can reduce the number of visits to the clinic by working women. Another strategy to increase participation by working women used by almost 90 percent of local agencies was issuing food vouchers for 2 or 3 months. As California state officials pointed out, issuing vouchers every 2 months, instead of monthly, to participants who are not at medical risk reduces the number of visits to the clinic. Three-fourths of the local WIC agencies had some provision for lunch hour appointments, which allows some working women to take care of their visit during their lunch break. Other actions to increase WIC participation by working women included reducing the time spent at clinic visits. We estimated that about 66 percent of local WIC agencies have taken steps to expedite clinic visits for working women. For example, a local agency in New York State allows working women who must return to work to go ahead of others in the clinic. The director of a local WIC agency in Pennsylvania allows working women to send in their paperwork before they visit, thereby reducing the time spent at the clinic. The Kansas state WIC agency generally requires women to participate in the program in the county where they reside, but it will allow working women to participate in the county where they work when it is more convenient for them. 
Other strategies adopted by some local WIC agencies include mailing vouchers to working women under special circumstances, thereby eliminating the need for them to visit the clinic (about 60 percent of local agencies); offering extended clinic hours of operation beyond the routine workday (about 20 percent of local agencies offer early morning hours); and locating clinics at or near work sites, including various military installations (about 5 percent of local agencies). Our survey found that about 76 percent of the local WIC agency directors believed that their clinics are reasonably accessible for working women. In reaching this conclusion, the directors considered their clinic's hours of operation, the amount of time that participants wait for service, and the ease with which participants are able to get appointments. Despite the widespread use of strategies to increase accessibility, 9 percent of WIC directors believe accessibility is still a problem for working women. In our discussions with these directors, the most frequently cited reason for rating accessibility as moderately or very difficult was the inability to operate during evenings or on Saturdays because of a lack of staff, staff's resistance to working schedules beyond the routine workday, and/or the lack of safety in the area around the clinic after dark or on weekends. Our survey also identified several factors not directly related to the accessibility of clinic services that serve to limit participation by working women. The factors most frequently cited related to how working women view the program. Specifically, directors reported that some working women do not participate because they (1) lose interest in the program's benefits as their income increases, (2) perceive a stigma attached to receiving WIC benefits, or (3) think the program is limited to those women who do not work. With respect to the first issue, 65 percent of the directors reported that working women lose interest in WIC benefits as their income rises. For example, one agency director reported that women gain a sense of pride when their income rises and they no longer want to participate in the program. Concerning the second issue, the stigma some women associate with WIC—how their participation in the program makes them appear to their friends and co-workers—is another significant factor limiting participation, according to about 57 percent of the local agency directors. Another aspect of the perceived stigma associated with WIC participation is related to the so-called "grocery store experience." The use of WIC vouchers to purchase food in grocery stores can cause confusion and delays for both the participant-shopper and the store clerk at the check-out counter. For example, Texas requires its WIC participants to buy the cheapest brand of milk, evaporated milk, and cheese available in the store. Texas also requires participants to buy the lowest-cost 46-ounce fluid or 12-ounce frozen fruit juices from an approved list of types (orange, grapefruit, orange/grapefruit, purple grape, pineapple, orange/pineapple, and apple) and/or specific brands. In comparing the cost of WIC-approved items, participants must also consider such things as weekly store specials and cost per ounce in order to purchase the lowest-priced items. While these restrictions may lower the dollar amount that the state pays for WIC foods, they may also make food selections more confusing for participants.
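To illustrate the comparison that lowest-cost rules such as those just described can require at the shelf, the following sketch computes cost per ounce across several juice options and selects the cheapest. It is illustrative only; the brands, sizes, and prices are hypothetical.

# Sketch: finding the lowest-cost WIC-approved juice by cost per ounce.
# Product names, sizes, and prices are hypothetical examples.

juices = [
    {"name": "Brand A orange, 46 oz", "ounces": 46, "price": 2.89},
    {"name": "Brand B grapefruit, 46 oz", "ounces": 46, "price": 2.49},
    {"name": "Brand B orange, 46 oz (weekly special)", "ounces": 46, "price": 2.19},
]

def cost_per_ounce(item: dict) -> float:
    """Price divided by container size, the figure a shopper must compare."""
    return item["price"] / item["ounces"]

cheapest = min(juices, key=cost_per_ounce)
for item in juices:
    print(f'{item["name"]}: ${cost_per_ounce(item):.3f} per ounce')
print(f'Lowest-cost choice: {cheapest["name"]}')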
According to Texas WIC officials, participants and cashiers often have difficulty determining which products have the lowest price. Consequently, a delay in the check-out process may result in unwanted attention for the WIC participant. Finally, more than half of the directors indicated that a major factor limiting participation is that working women are not aware that they are eligible to participate in WIC. Furthermore, local agency officials in California and Texas said that WIC participants who were not working when they entered the program but who later go to work often assume that they are then no longer eligible for WIC and therefore drop out of the program. In September 1997, we reported that the states have used a variety of initiatives to control WIC costs. According to the WIC agency directors in the 50 states and the District of Columbia we surveyed, two practices in particular are saving millions of dollars. These two practices are (1) contracting with manufacturers to obtain rebates on WIC foods in addition to infant formula and (2) limiting authorized food selections by, for example, requiring participants to select brands of foods that have the lowest cost. With respect to rebates, nine state agencies received $6.2 million in rebates in fiscal year 1996 through individual or multistate contracts for two WIC-approved foods—infant cereal and/or infant fruit juices. Four of these state agencies and seven other state agencies—a total of 11 states—reported that they were considering, or were in the process of, expanding their use of rebates to foods other than infant formula. In May 1997, Delaware, one of the 11 states, joined the District of Columbia, Maryland, and West Virginia in a multistate rebate contract for infant cereal and juices. Another state, California, was the first state to expand its rebate program in March 1997 to include adult juices. California spends about $65 million annually on adult juice purchases. California’s WIC director told us that the state expects to collect about $12 million in annual rebates on the adult juices, thereby allowing approximately 30,000 more people to participate in the program each month. With respect to placing limits on food selections, all of the 48 state WIC directors responding to our survey reported that their agencies imposed limits on one or more of the food items eligible for program reimbursement. The states may specify certain brands; limit certain types of foods, such as allowing the purchase of block but not sliced cheese; restrict container sizes; and require the selection of only the lowest-cost brands. However, some types of restrictions are more widely used than others. For example, 47 WIC directors reported that their states’ participants are allowed to choose only certain container or package sizes of one or more food items, but only 20 directors reported that their states require participants to purchase the lowest-cost brand for one or more food items. While all states have one or more food selection restrictions, 17 of the 48 WIC directors responding to our questionnaire reported that their states are considering the use of additional limits on food selection to contain or reduce WIC costs. Separately or in conjunction with measures to contain food costs, we found that 39 state agencies have placed restrictions on their authorized retail outlets (food stores and pharmacies allowed to redeem WIC vouchers—commonly referred to as vendors) to hold down costs. 
For example, the prices for WIC food items charged by WIC vendors in Texas must not exceed by more than 8 percent the average prices charged by vendors doing a comparable dollar volume of business in the same area. Once selected, authorized WIC vendors must maintain competitive prices. According to Texas WIC officials, the state does not limit the number of vendors that can participate in WIC. However, Texas' selection criteria for approving a vendor exclude many stores from the program. In addition, 18 WIC directors reported that their states restrict the number of vendors allowed to participate in the program by using ratios of participants to vendors. For example, Delaware used a ratio of 200 participants per store in fiscal year 1997 to determine the total number of vendors that could participate in the program in each WIC service area. By limiting the number of vendors, states can more frequently monitor vendors and conduct compliance investigations to detect and remove vendors from the program who commit fraud or other serious program violations, according to federal and state WIC officials. A July 1995 report by USDA's Office of Inspector General found that the annual loss to WIC as a result of vendor fraud in one state could exceed $3 million. The WIC directors in 2 of the 39 states that reported limiting the number of vendors indicated that they are planning to introduce additional vendor initiatives, such as selecting vendors on the basis of competitive food pricing. We also found that opportunities exist to substantially lower the cost of special infant formula. Special formula, unlike the regular formula provided by WIC, is provided to infants with special dietary needs or medical conditions. Cost savings may be achieved if the states purchase special infant formula at wholesale instead of retail prices. The monthly retail cost of these special formulas can be high—ranging in one state we surveyed from $540 to $900 for each infant. These high costs occur in part because vendors' retail prices are much higher than the wholesale cost. Twenty-one states avoid paying retail prices by purchasing the special formula directly from the manufacturers and distributing it to participants. For example, Pennsylvania turned to the direct purchase of special infant formula to address the lack of availability and high cost of vendor-provided formulas. It established a central distribution warehouse for special formulas in August 1996 to serve the less than 1 percent of WIC infants in the state—about 400—who needed special formula in fiscal year 1996. The program is expected to save about $100,000 annually. Additional savings may be possible if these 21 states are able to reduce or eliminate the authorization and monitoring costs of retail vendors and pharmacies that distribute only special infant formula. For example, by establishing its own central distribution warehouse, Pennsylvania plans to remove over 200 pharmacies from the program, resulting in significant administrative cost savings, according to the state WIC director. While the use of these cost containment practices could be expanded, our work found that a number of obstacles may discourage the states from adopting or expanding these practices.
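The Texas price cap and the Delaware participant-to-vendor ratio described above can be expressed as simple screening rules, as in the sketch below. The sketch is illustrative only: the prices and participant counts are hypothetical, the rounding choice is an assumption, and actual state vendor selection procedures involve additional criteria.

# Sketch: two illustrative vendor-screening rules drawn from the practices
# described above. All numbers other than the 8 percent cap and the
# 200-participants-per-store ratio are hypothetical.

def within_price_cap(vendor_price: float, area_average_price: float,
                     cap: float = 0.08) -> bool:
    """Texas-style rule: a vendor's price may not exceed the average price
    charged by comparable vendors in the same area by more than 8 percent."""
    return vendor_price <= area_average_price * (1 + cap)

def max_vendors(participants: int, participants_per_store: int = 200) -> int:
    """Delaware-style rule: limit the number of authorized vendors in a
    service area using a ratio of 200 participants per store (rounded up
    here for illustration)."""
    return -(-participants // participants_per_store)  # ceiling division

if __name__ == "__main__":
    print(within_price_cap(vendor_price=3.40, area_average_price=3.10))  # False: more than 8% above average
    print(within_price_cap(vendor_price=3.25, area_average_price=3.10))  # True
    print(max_vendors(participants=4_500))  # 23 stores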
These obstacles include problems that states have with existing program restrictions on how additional funds made available through cost containment initiatives can be used and resistance from the retail community when states attempt to establish selection requirements or limit retail stores participating in the program. First, FNS policy requires that during the grant year, any savings from cost containment accrue to the food portion of the WIC grant, thereby allowing the states to provide food benefits to additional WIC applicants. None of the cost savings are automatically available to the states for support services, such as staffing, clinic facilities, voucher issuance sites, outreach, and other activities that are funded by WIC’s NSA (Nutrition Services and Administration) grants. These various support activities are needed to increase participation in the program, according to WIC directors. As a result, the states may not be able to serve more eligible persons or they may have to carry a substantial portion of the program’s support costs until the federal NSA grant is adjusted for the increased participation level—a process that can take up to 2 years, according to the National Association of WIC Directors. FNS officials pointed out that provisions in the federal regulations allow the states that have increased participation to use a limited amount of their food grant funds for support activities. However, some states may be reluctant to use this option because, as one director told us, doing so may be perceived as taking food away from babies. FNS and some state WIC officials told us that limiting the number of vendors in the program is an important aspect of containing WIC costs. However, they told us the retail community does not favor limits on the number of vendors that qualify to participate. Instead, the retail community favors the easing of restrictions on vendor eligibility thereby allowing more vendors that qualify to accept WIC vouchers. According to FNS officials, the amount that WIC spends for food would be substantially higher if stores with higher prices were authorized to participate in the program. To encourage the further implementation of WIC cost containment practices, we recommended in our September 1997 report that FNS work with the states to identify and implement strategies to reduce or eliminate such obstacles. These strategies could include modifying the policies and procedures that allow the states to use cost containment savings for the program’s support services and establishing regulatory guidelines for selecting vendors to participate in the program. FNS concurred with our findings and recommendations. We will continue to monitor the agency’s progress made in implementing strategies to reduce or eliminate obstacles to cost containment. Our survey also collected information on the practices that the states are using to ensure that program participants meet the program’s income and residency requirements. The states’ requirements for obtaining income documentation vary. Of the 48 WIC directors responding to our survey, 32 reported that their state agencies generally require applicants to provide documentation of income eligibility; 14 reported that their states did not require documentation and allowed applicants to self-declare their income; and 2 reported that income documentation procedures are determined by local WIC agencies. 
Of the 32 states requiring income documentation, 30 reported that their documentation requirement could be waived under certain conditions. Our review of state income documentation policies found that waiving an income documentation requirement can be routine. For example, we found that some states requiring documentation of income will waive the requirement and permit self-declaration of income if the applicants do not bring income documents to their certification meeting. While existing federal regulations allow the states to establish their own income documentation requirements for applicants, we are concerned that basing income eligibility on the applicants' self-declarations of income may permit ineligible applicants to participate in WIC. However, the extent of this problem is unknown because there has not been a recent study of the number of program participants who are not eligible because of income. Information from a study that FNS has begun should enable that agency to determine whether changes in states' requirements for income documentation are needed. Regarding residency requirements, we found that some states have not been requiring proof of residency and personal identification for program certification, as required by federal regulations. In our September 1997 report, we recommended that FNS take the necessary steps to ensure that state agencies require participants to provide identification and evidence that they reside in the states where they receive benefits. In February 1998, FNS issued a draft policy memorandum to its regional offices that is intended to stress the continuing importance of participant identification, residency, and income requirements and procedures to ensure integrity in the certification and food instrument issuance processes. Also, at the request of FNS, we presented our review's findings and recommendations at the EBT and Program Integrity Conference jointly sponsored by the National Association of WIC Directors and FNS in December 1997. The conference highlighted the need to reduce ineligible participation and explored improved strategies to validate participants' income and residency eligibility. FNS requires the states to operate a rebate program for infant formula. By negotiating rebates with manufacturers of infant formula purchased through WIC, the states greatly reduce their average per person food costs so that more people can be served. At the request of the Chairman of the House Budget Committee, we are currently reviewing the impacts that these rebates have had on non-WIC consumers of infant formula. Specifically, we will report on (1) how prices in the infant formula market changed for non-WIC purchasers and WIC agencies after the introduction of sole-source rebates, (2) whether there is any evidence indicating that non-WIC purchasers of infant formula subsidized WIC purchases through the prices they paid, and (3) whether the significant cost savings for WIC agencies under sole-source rebates for infant formula have implications for the use of rebates for other WIC products. Thank you again for the opportunity to appear before you today. We would be pleased to respond to any questions you may have.
Pursuant to a congressional request, GAO discussed its completed reviews of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), focusing on the: (1) reasons that states had for not spending all of their federal grant funds; (2) efforts of WIC agencies to improve access to WIC benefits for working women; and (3) various practices states use to lower the costs of WIC and ensure that the incomes of WIC applicants meet the program's eligibility requirements for participation. GAO noted that: (1) states had unspent WIC funds for a variety of reasons; (2) in fiscal year 1996, these funds totaled about $121.6 million, or about 3.3 percent of that year's $3.7 billion WIC grant; (3) some of these reasons were associated with the way WIC is structured; (4) virtually all the directors of local WIC agencies report that their clinics have taken steps to improve access to WIC benefits for working women; (5) the two most frequently cited strategies are: (a) scheduling appointments instead of taking participants on a first-come, first-served basis; and (b) allowing a person other than the participant to pick up food vouchers or checks, as well as nutrition information, and to pass these benefits on to the participant; (6) the states are using a variety of cost containment initiatives that have saved millions of dollars annually for WIC and enabled more individuals to participate in the program; and (7) some of these initiatives include obtaining rebates on WIC foods, limiting participants' food choices to lowest-cost items, and limiting the number of stores that participate in WIC.
Since the early 1990s, increasing computer interconnectivity—most notably growth in the use of the Internet—has revolutionized the way that our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous, but without proper safeguards in the form of appropriate information security, this widespread interconnectivity also poses significant risks to the government's computer systems and the critical operations and infrastructures they support. In prior reviews we have repeatedly identified weaknesses in almost all areas of information security controls at major federal agencies, including VA, and we have identified information security as a high-risk area across the federal government since 1997. In July 2005, we reported that pervasive weaknesses in the 24 major agencies' information security policies and practices threatened the integrity, confidentiality, and availability of federal information and information systems. As we reported, although federal agencies showed improvement in addressing information security, they also continued to have significant control weaknesses that put federal operations and assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. These weaknesses existed primarily because agencies had not yet fully implemented strong information security programs, as required by the Federal Information Security Management Act (FISMA). The significance of these weaknesses led us to conclude in the audit of the federal government's fiscal year 2005 financial statements that information security was a material weakness. Our audits also identified instances of similar types of weaknesses in nonfinancial systems. Weaknesses continued to be reported in each of the major areas of general controls: that is, the policies, procedures, and technical controls that apply to all or a large segment of an entity's information systems and help ensure their proper operation. To fully understand the significance of the weaknesses we identified, it is necessary to link them to the risks they present to federal operations and assets. Virtually all federal operations are supported by automated systems and electronic data, without which agencies would find it difficult, if not impossible, to carry out their missions and account for their resources. The following examples show the broad array of federal operations and assets placed at risk by information security weaknesses:
● Resources, such as federal payments and collections, could be lost or stolen.
● Computer resources could be used for unauthorized purposes or to launch attacks on others.
● Personal information, such as taxpayer data, social security records, and medical records, and proprietary business information could be inappropriately disclosed, browsed, or copied for purposes of identity theft, industrial espionage, or other types of crime.
● Critical operations, such as those supporting national defense and emergency services, could be disrupted.
● Data could be modified or destroyed for purposes of fraud, theft of assets, or disruption.
● Agency missions could be undermined by embarrassing incidents that result in diminished confidence in their ability to conduct operations and fulfill their fiduciary responsibilities.
The potential disclosure of personal information raises identity theft and privacy concerns.
Identity theft generally involves the fraudulent use of another person's identifying information—such as Social Security number, date of birth, or mother's maiden name—to establish credit, run up debt, or take over existing financial accounts. According to identity theft experts, individuals whose identities have been stolen can spend months or years and thousands of dollars clearing their names. Some individuals have lost job opportunities, been refused loans, or even been arrested for crimes they did not commit as a result of identity theft. The Federal Trade Commission (FTC) reported in 2005 that identity theft represented about 40 percent of all the consumer fraud complaints it received during each of the last 3 calendar years. Beyond the serious issues surrounding identity theft, the unauthorized disclosure of personal information also represents a breach of individuals' privacy rights to have control over their own information and to be aware of who has access to this information.

Federal agencies are subject to security and privacy laws aimed in part at preventing security breaches, including breaches that could enable identity theft. FISMA is the primary law governing information security in the federal government; it also addresses the protection of personal information in the context of securing federal agency information and information systems. The act defines federal requirements for securing information and information systems that support federal agency operations and assets. Under FISMA, agencies are required to provide sufficient safeguards to cost-effectively protect their information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction, including controls necessary to preserve authorized restrictions on access and disclosure (and thus to protect personal privacy, among other things). The act requires each agency to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. FISMA describes a comprehensive information security program as including the following elements:
● periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
● risk-based policies and procedures that cost-effectively reduce risks to an acceptable level and ensure that security is addressed throughout the life cycle of each information system;
● security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency;
● periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices;
● a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies through plans of action and milestones; and
● procedures for detecting, reporting, and responding to security incidents.
In particular, FISMA requires that for any information they hold, agencies evaluate the associated risk according to three categories: (1) confidentiality, which is the risk associated with unauthorized disclosure of the information; (2) integrity, the risk of unauthorized modification or destruction of the information; and (3) availability, which is the risk of disruption of access to or use of information.
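The three risk categories just described can be illustrated with a simple record that captures an assessed impact level for each category. The sketch below is a minimal illustration, not an agency methodology; the system name and ratings are hypothetical, and the "high-water mark" rule for an overall rating is one common convention rather than a FISMA requirement.

# Sketch: recording a simple risk assessment for a system that holds personal
# information, using the three categories described above (confidentiality,
# integrity, availability). System name and ratings are hypothetical.

from dataclasses import dataclass

IMPACT_LEVELS = ("low", "moderate", "high")

@dataclass
class RiskAssessment:
    system_name: str
    confidentiality: str  # risk of unauthorized disclosure
    integrity: str        # risk of unauthorized modification or destruction
    availability: str     # risk of disruption of access or use

    def overall(self) -> str:
        """A common 'high-water mark' approach: the overall rating is the
        highest rating assigned to any single category."""
        ratings = (self.confidentiality, self.integrity, self.availability)
        return max(ratings, key=IMPACT_LEVELS.index)

benefits_db = RiskAssessment(
    system_name="Veterans benefits database (hypothetical)",
    confidentiality="high",   # contains personal identifiers
    integrity="moderate",
    availability="moderate",
)
print(benefits_db.overall())  # prints "high"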
Thus, each agency should assess the risk associated with personal data held by the agency and develop appropriate protections. The agency can use this risk assessment to determine the appropriate controls (operational, technical, and managerial) that will reduce the risk to an acceptably low level. For example, if an agency assesses the confidentiality risk of the personal information as high, the agency could create control mechanisms to help protect the data from unauthorized disclosure. Besides appropriate policies, these controls would include access controls and monitoring systems.

Access controls are key technical controls to protect the confidentiality of information. Organizations use these controls to grant employees the authority to read or modify only the information the employees need to perform their duties. In addition, access controls can limit the activities that an employee can perform on data. For example, an employee may be given the right to read data, but not to modify or copy it. Assignment of rights and permissions must be carefully considered to avoid giving users unnecessary access to sensitive files and directories.

To ensure that controls are, in fact, implemented and that no violations have occurred, agencies need to monitor compliance with security policies and investigate security violations. It is crucial to determine what, when, and by whom specific actions are taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail that they can use to determine the source of a transaction or attempted transaction and to monitor users' activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security events.

A comprehensive security program of the type described is a prerequisite for the protection of personally identifiable information held by agencies. In addition, agencies are subject to requirements specifically related to personal privacy protection, which come primarily from two laws, the Privacy Act of 1974 and the E-Government Act of 2002. The Privacy Act places limitations on agencies' collection, disclosure, and use of personal information maintained in systems of records. The act describes a "record" as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines "system of records" as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies establish or make changes to a system of records, they must notify the public by a "system-of-records notice": that is, a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended "routine" uses of the data, and procedures that individuals can use to review and correct personal information. Among other provisions, the act also requires agencies to define and limit themselves to specific predefined purposes.
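The access controls and audit trails described above can be illustrated with a minimal sketch in which each user holds only the rights needed for his or her duties and every access attempt, whether allowed or denied, is written to an audit log. The user roles, rights, and record identifiers are hypothetical.

# Sketch: least-privilege access checks with an audit trail, illustrating the
# access controls and monitoring described above. Users, rights, and records
# are hypothetical.

import logging

# Audit trail: record who did what, to which record, and when.
logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Rights assigned per user; employees get only what their duties require.
PERMISSIONS = {
    "claims_clerk": {"read"},            # may view records, not change them
    "records_admin": {"read", "modify"},
}

def access_record(user: str, action: str, record_id: str) -> bool:
    """Allow the action only if the user holds that right; log every attempt."""
    allowed = action in PERMISSIONS.get(user, set())
    outcome = "ALLOWED" if allowed else "DENIED"
    logging.info("%s user=%s action=%s record=%s", outcome, user, action, record_id)
    return allowed

if __name__ == "__main__":
    access_record("claims_clerk", "read", "record-1001")    # allowed, logged
    access_record("claims_clerk", "modify", "record-1001")  # denied, logged for review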
The provisions of the Privacy Act are consistent with and largely based on a set of principles for protecting the privacy and security of personal information, known as the Fair Information Practices, which have been widely adopted as a standard benchmark for evaluating the adequacy of privacy protections; they include such principles as openness (keeping the public informed about privacy policies and practices) and accountability (those controlling the collection or use of personal information should be accountable for taking steps to ensure the implementation of these principles).

The E-Government Act of 2002 strives to enhance protection for personal information in government information systems by requiring that agencies conduct privacy impact assessments (PIAs). A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. More specifically, according to OMB guidance, a PIA is to (1) ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. To the extent that PIAs are made publicly available, they provide explanations to the public about such things as the information that will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected.

Federal laws to date have not required agencies to report security breaches to the public, although breach notification has played an important role in the context of security breaches in the private sector. For example, requirements of California state law led ChoicePoint, a large information reseller, to notify its customers of a security breach in February 2005. Since the ChoicePoint notification, bills were introduced in at least 44 states and enacted in at least 29 that require some form of notification upon a breach. A number of congressional hearings were held and bills introduced in 2005 in the wake of the ChoicePoint security breach as well as incidents at other firms. In March 2005, the House Subcommittee on Commerce, Trade, and Consumer Protection of the House Energy and Commerce Committee held a hearing entitled "Protecting Consumers' Data: Policy Issues Raised by ChoicePoint," which focused on potential remedies for security and privacy concerns regarding information resellers. Similar hearings were held by the House Energy and Commerce Committee and by the U.S. Senate Committee on Commerce, Science, and Transportation in spring 2005.

In carrying out its mission of providing health care and benefits to veterans, VA relies on a vast array of computer systems and telecommunications networks to support its operations and store sensitive information, including personal information on veterans. VA's networks are highly interconnected, its systems support many users, and the department has increasingly moved to more interactive, Web-based services to better meet the needs of its customers. Effectively securing these computer systems and networks is critical to the department's ability to safeguard its assets, maintain the confidentiality of sensitive veterans' health and disability benefits information, and ensure the integrity of its financial data.
In this complex IT environment, VA has faced long-standing challenges in achieving effective information security across the department. Our reviews identified wide-ranging, often recurring deficiencies in the department's information security controls (attachment 2 provides further detail on our reports and the areas of weakness they discuss). Examples of areas of deficiency include the following.

Access authority was not appropriately controlled. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Electronic access controls are intended to prevent, limit, and detect unauthorized access to computing resources, programs, and information and include controls related to user accounts and passwords, user rights and file permissions, logging and monitoring of security-relevant events, and network management. Inadequate controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. However, VA had not established effective electronic access controls to prevent individuals from gaining unauthorized access to its systems and sensitive data, as the following examples illustrate:
● User accounts and passwords: In 1998, many user accounts at four VA medical centers and data centers had weaknesses, including passwords that could be easily guessed, null passwords, and passwords that were set to never expire. We also found numerous instances where medical and data center staff members were sharing user IDs and passwords.
● User rights and permissions: We reported in 2000 that three VA health care systems were not ensuring that user accounts with broad access to financial and sensitive veteran information had proper authorization for such access, and were not reviewing these accounts to determine if their level of access remained appropriate.
● Logging and monitoring of security-related events: In 1998, VA did not have any departmentwide guidance for monitoring both successful and unsuccessful attempts to access system files containing key financial information or sensitive veteran data, and none of the medical and data centers we visited were actively monitoring network access activity. In 1999, we found that one data center was monitoring failed access attempts, but was not monitoring successful accesses to sensitive data and resources for unusual or suspicious activity.
● Network management: In 2000, we reported that one of the health care systems we visited had not configured a network parameter to effectively prevent unauthorized access to a network system; this same health care system had also failed to keep its network system software up to date.

Physical security controls were inadequate. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which they are housed and by periodically reviewing the access granted, in order to ensure that access continues to be appropriate. VA had weaknesses in the physical security for its computer facilities. For example, in our 1998 and 2000 reports, we stated that none of the facilities we visited were adequately controlling access to their computer rooms.
In addition, in 1998 we reported that sensitive equipment at two facilities was not adequately protected, increasing the risk of disruption to computer operations or network communications.

Employees were not prevented from performing incompatible duties. Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation. Dividing duties among two or more individuals or organizational groups diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. We determined that VA did not assign employee duties and responsibilities in a manner that segregated incompatible functions among individuals or groups of individuals. For example, in 1998 we reported that some system programmers also had security administrator privileges, giving them the ability to eliminate any evidence of their activity in the system. In 2000, we reported that two VA health care systems allowed some employees to request, approve, and receive medical items without management approval, violating both basic segregation of duties principles and VA policy; in addition, no mitigating controls were found to alert management of purchases made in this manner.

Software change control procedures were not consistently implemented. It is important to ensure that only authorized and fully tested systems are placed in operation. To ensure that changes to systems are necessary, work as intended, and do not result in the loss of data or program integrity, such changes should be documented, authorized, tested, and independently reviewed. We found that VA did not adequately control changes to its operating systems. For example, in 1998 we reported that one VA data center had not established detailed written procedures or formal guidance for modifying operating system software, for approving and testing operating system software changes, or for implementing these changes. The data center had made more than 100 system software changes during fiscal year 1997, but none of the changes included evidence of testing, independent review, or acceptance. We reported in 2000 that two VA health care systems had not established procedures for periodically reviewing changes to standard application programs to ensure that only authorized program code was implemented.

Service continuity planning was not complete. In addition to protecting data and programs from misuse, organizations must ensure that they are adequately prepared to cope with a loss of operational capability due to earthquakes, fires, accidents, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested service continuity plan. Such a plan is critical for helping to ensure that information system operations and data can be promptly restored in the event of a disaster. We reported that VA had not completed or tested service continuity plans for several systems. For example, in 1998 we reported that one VA data center had 17 individual disaster recovery plans covering various segments of the organization, but it did not have an overall document that integrated the 17 separate plans and defined the roles and responsibilities for the disaster recovery teams.
In 2000, we determined that the service continuity plans for two of the three health care systems we visited did not include critical elements such as detailed recovery procedures, provisions for restoring mission-critical systems, and a list of key contacts; in addition, none of the health care systems we visited were fully testing their service continuity plans.

These deficiencies existed, in part, because VA had not implemented key components of a comprehensive computer security program. Specifically, VA's computer security efforts lacked
● clearly delineated security roles and responsibilities;
● regular, periodic assessments of risk;
● security policies and procedures that addressed all aspects of VA's interconnected environment;
● an ongoing security monitoring program to identify and investigate unauthorized, unusual, or suspicious access activity; and
● a process to measure, test, and report on the effectiveness of computer system, network, and process controls.
As a result, we made a number of recommendations in 2002 that were aimed at improving VA's security management. The primary elements of these recommendations were that (1) VA centralize its security management functions and (2) it perform other actions to establish an information security program, including actions related to risk assessments, security policies and procedures, security awareness, and monitoring and evaluating computer controls. GAO, Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results, GAO-02-703 (Washington, D.C.: June 12, 2002).

Central security management: VA has taken steps to centralize its security management functions and has issued departmentwide security policies and procedures. However, the department still needed to develop policy and guidance to ensure (1) authority and independence for security officers and (2) departmentwide coordination of security functions.

Periodic risk assessments: VA is implementing a commercial tool to identify the level of risk associated with system changes and also to conduct information security risk assessments. It also created a methodology that establishes minimum requirements for such risk assessments. However, it has not yet completed its risk assessment policy and guidance. VA reported that such guidance was forthcoming as part of an overarching information system security certification and accreditation policy that was to be developed during 2006. Without these elements, VA cannot be assured that it is appropriately performing risk assessments departmentwide.

Security policies and procedures: VA's cyber security officer reported that VA has actions ongoing to develop a process for collecting and tracking performance data, ensuring management action when needed, and providing independent validation of reported issues. VA also has ongoing efforts in the area of detecting, reporting, and responding to security incidents. For example, it established network intrusion prevention capability at its four enterprise gateways. It is also developing strategic and tactical plans to complete a security incident response program to monitor suspicious activity and cyber alerts, events, and incidents. However, these plans are not complete.

Security awareness: VA has taken steps to improve security awareness training. It holds an annual department information security conference, and it has developed a Web portal for security training, policy, and procedures, as well as a security awareness course that VA employees are required to review annually. However, VA has not demonstrated that it has a process to ensure compliance.
Monitoring and evaluating computer controls: VA established a process to better monitor and evaluate computer controls by tracking the status of security weaknesses, corrective actions taken, and independent validations of corrective actions through a software database. However, more remains to be done in this area. For example, although certain components of VA reported vulnerability and penetration testing to evaluate controls on internal and external access to VA systems, this testing was not part of an ongoing departmentwide program.

Since our last report in 2002, VA's IG and independent auditors have continued to report serious weaknesses with the department's information security controls. The auditors' report on internal controls, prepared at the completion of VA's 2005 financial statement audit, identified weaknesses related to access control, segregation of duties, change control, and service continuity—a list of weaknesses that are virtually identical to those we identified years earlier. The department's FY 2005 Annual Performance and Accountability Report states that the IG determined that many information system security vulnerabilities reported in national audits from 2001 through 2004 remain unresolved, despite the department's actions to implement IG recommendations in previous audits. The IG also reported specific security weaknesses and vulnerabilities at 45 of 60 VA health care facilities and 11 of 21 VA regional offices where security issues were reviewed, placing VA at risk that sensitive data may be exposed to unauthorized access and improper disclosure, among other things. As a result, the IG determined that weaknesses in VA's information technology controls were a material weakness. In response to the IG's findings, the department indicates that plans are being implemented to address the material weakness in information security. According to the department, it has marshaled limited resources to make significant improvement in its overall security posture in the near term by prioritizing FISMA remediation activities, and work will continue in the next fiscal year. Despite these actions, the department has not fully implemented the key elements of a comprehensive security management program, and its efforts have not been sufficient to effectively protect its information systems and information, including personally identifiable information, from unauthorized disclosure, misuse, or loss.

In addition to establishing a robust information security program, agencies can take other actions to help guard against the possibility that personal information they maintain is inadvertently compromised. These include conducting privacy impact assessments and taking other practical measures. It is important that agencies identify the specific instances in which they collect and maintain personal information and proactively assess the means they intend to use to protect this information. This can be done most effectively through the development of privacy impact assessments (PIAs), which, as previously mentioned, are required by the E-Government Act of 2002 when agencies use information technology to process personal information. PIAs are important because they serve as a tool for agencies to fully consider the privacy implications of planned systems and data collections before those systems and collections have been fully implemented, when it may be relatively easy to make critical adjustments. In prior work we have found that agencies do not always conduct PIAs as they are required.
For example, our review of selected data mining efforts at federal agencies determined that PIAs were not always being done in full compliance with OMB guidance. Similarly, as identified in our work on federal agency use of information resellers, few PIAs were being developed for systems or programs that made use of information reseller data, because officials did not believe they were required. Complete assessments are an important tool for agencies to identify areas of noncompliance with federal privacy laws, evaluate risks arising from electronic collection and maintenance of information about individuals, and evaluate protections or alternative processes needed to mitigate the risks identified. Agencies that do not take all the steps required to protect the privacy of personal information risk the improper exposure or alteration of such information. We recommended that the agencies responsible for the data mining efforts we reviewed complete or revise PIAs as needed and make them available to the public. We also recommended that OMB revise its guidance to clarify the applicability of the E-Government Act's PIA requirement to the use of personal information from resellers. OMB stated that it would discuss its guidance with agency senior officials for privacy to determine whether additional guidance concerning reseller data is needed.

Besides strategic approaches such as establishing an information security program and conducting PIAs, agencies can consider a range of specific practical measures for protecting the privacy and security of personal information. Several that may be of particular value in preventing inadvertent data breaches include the following.

Limit collection of personal information. One item to be analyzed as part of a PIA is the extent to which an agency needs to collect personal information in order to meet the requirements of a specific application. Limiting the collection of personal information, among other things, serves to limit the opportunity for that information to be compromised. For example, key identifying information—such as Social Security numbers—may not be needed for many agency applications that have databases of other personal information. Limiting the collection of personal information is also one of the fair information practices, which are fundamental to the Privacy Act and to good privacy practice in general.

Limit data retention. Closely related to limiting data collection is limiting retention. Retaining personal data longer than needed by an agency or statutorily required adds to the risk that the data will be compromised. In discussing data retention, California's Office of Privacy Protection recently reported an example in which a university experienced a security breach that exposed 15-year-old data, including Social Security numbers. The university subsequently reviewed its policies and decided to shorten the retention period for certain types of information. As part of their PIAs, federal agencies can make decisions up front about how long they plan to retain personal data, aiming to retain the data for as brief a period as necessary.

Limit access to personal information and train personnel accordingly. Only individuals with a need to access agency databases of personal information should have such access, and controls should be in place to monitor that access.
Further, agencies can implement technological controls to prevent personal data from being readily transferred to unauthorized systems or media, such as laptop computers, discs, or other electronic storage devices. Security training, which is required for all federal employees under FISMA, can include training on the risks of exposing personal data to potential identity theft, thus helping to reduce the likelihood of data being exposed inadvertently.

Consider using technological controls such as encryption when data need to be stored on portable devices. In certain instances, agencies may find it necessary to enable employees to have access to personal data on portable devices such as laptop computers. As discussed, this should be minimized. However, when absolutely necessary, the risk that such data could be exposed to unauthorized individuals can be reduced by using technological controls such as encryption, which significantly limits the ability of such individuals to gain access to the data. Although encrypting data adds to the operational burden on authorized individuals, who must enter pass codes or use other authentication means to convert the data into readable text, it can provide reasonable assurance that stolen or lost computer equipment will not result in personal data being compromised, as occurred in the recent incident at VA. A decision about whether to use encryption would logically be made as an element of the PIA process and an agency's broader information security program.
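To make the encryption measure more concrete, the following is a minimal, illustrative sketch (not a VA or GAO implementation) showing how a file of personal data might be encrypted before being copied to a portable device. It uses Python and the widely available third-party cryptography package; the file names, sample records, and key handling are assumptions for illustration only, and an agency deployment would instead rely on approved full-disk or file encryption products and centralized key management.

```python
# Illustrative sketch only: file-level encryption before data are copied to a
# portable device. Assumes the third-party "cryptography" package is installed
# (pip install cryptography). File names and sample data are hypothetical.
from cryptography.fernet import Fernet


def encrypt_file(plaintext_path: str, encrypted_path: str, key: bytes) -> None:
    """Write an encrypted copy of a sensitive file for transport on portable media."""
    cipher = Fernet(key)
    with open(plaintext_path, "rb") as source:
        ciphertext = cipher.encrypt(source.read())
    with open(encrypted_path, "wb") as target:
        target.write(ciphertext)


def decrypt_file(encrypted_path: str, key: bytes) -> bytes:
    """Recover the original contents; raises an error if the data were altered."""
    cipher = Fernet(key)
    with open(encrypted_path, "rb") as source:
        return cipher.decrypt(source.read())


if __name__ == "__main__":
    # In practice, the key would be generated and held by a key management system,
    # separate from the portable device that carries the encrypted file.
    key = Fernet.generate_key()
    with open("personnel_extract.csv", "wb") as sample:  # hypothetical sample file
        sample.write(b"name,date_of_birth\nJane Doe,1960-01-01\n")
    encrypt_file("personnel_extract.csv", "personnel_extract.csv.enc", key)
    print(decrypt_file("personnel_extract.csv.enc", key).decode())
```

Even a simple control of this kind means that loss or theft of the device exposes only unreadable ciphertext unless the separately stored key is also compromised.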
While these suggestions do not amount to a complete prescription for protecting personal data, they are key elements of an agency's strategy for reducing the risks that could lead to identity theft.

In the event a data breach does occur, agencies must respond quickly in order to minimize the potential harm associated with identity theft. The chairman of the Federal Trade Commission has testified that the Commission believes that if a security breach creates a significant risk of identity theft or other related harm, affected consumers should be notified. The Federal Trade Commission has also reported that the overall cost of an incident of identity theft, as well as the harm to the victims, is significantly smaller if the misuse of the victim's personal information is discovered quickly. Applicable laws such as the Privacy Act currently do not require agencies to notify individuals of security breaches involving their personal information; however, doing so allows those affected the opportunity to take steps to protect themselves against the dangers of identity theft. For example, California's data breach notification law is credited with bringing to the public's notice large data breaches within the private sector, such as those involving ChoicePoint and LexisNexis last year. Arguably, the California law may have mitigated the risk of identity theft to affected individuals by keeping them informed about data breaches and thus enabling them to take steps such as contacting credit bureaus to have fraud alerts placed on their credit files, obtaining copies of their credit reports, scrutinizing their monthly financial account statements, and taking other steps to protect themselves. Breach notification is also important in that it can help an organization address key privacy rights of individuals, in accordance with the fair information practices mentioned earlier.

Breach notification is one way that organizations—either in the private sector or the government—can follow the openness principle and meet their responsibility for keeping the public informed of how their personal information is being used and who has access to it. Equally important, notification is consistent with the principle that those controlling the collection or use of personal information should be accountable for taking steps to ensure the implementation of the other principles, such as use limitation and security safeguards. Public disclosure of data breaches is a key step in ensuring that organizations are held accountable for the protection of personal information.

Although the principle of notifying affected individuals (or the public) about data breaches has clear benefits, determining the specifics of when and how an agency should issue such notifications presents challenges, particularly in determining the specific criteria for incidents that merit notification. In congressional testimony, the Federal Trade Commission raised concerns about the threshold at which consumers should be notified of a breach, cautioning that too strict a standard could have several negative effects. First, notification of a breach when there is little or no risk of harm might create unnecessary concern and confusion. Second, a surfeit of notices, resulting from notification criteria that are too strict, could render all such notices less effective, because consumers could become numb to them and fail to act when risks are truly significant. Finally, the costs to both individuals and businesses are not insignificant and may be worth considering. FTC points out that, in response to a security breach notification, a consumer may cancel credit cards, contact credit bureaus to place fraud alerts on credit files, or obtain a new driver's license number. These actions could be time-consuming for the individual and costly for the companies involved. Given these potential negative effects, care is clearly needed in defining appropriate criteria for required breach notifications.

Once a determination has been made that a public notice is to be issued, care must be taken to ensure that it does its job effectively. Designing useful, easy-to-understand notices has been cited as a challenge in other areas where privacy notices are required by law, such as in the financial industry—where businesses are required by the Gramm-Leach-Bliley Act to send notices to consumers about their privacy practices—and in the federal government, which is required by the Privacy Act to issue public notices in the Federal Register about its systems of records containing personal information. For example, as noted during a public workshop hosted by the Department of Homeland Security's Privacy Office, designing easy-to-understand consumer financial privacy notices to meet Gramm-Leach-Bliley Act requirements has been challenging. Officials from the FTC and the Office of the Comptroller of the Currency described widespread criticism of these notices—that they were unexpected, too long, filled with legalese, and not understandable. If an agency is to notify people of a data breach, it should do so in such a way that they understand the nature of the threat and what steps they need to take to protect themselves against identity theft. In connection with its state law requiring security breach notifications, the California Office of Privacy Protection has published recommended practices for designing and issuing security breach notices.
The office recommends that such notifications include, among other things,
● a general description of what happened;
● the type of personal information that was involved;
● what steps have been taken to prevent further unauthorized acquisition of personal information;
● the types of assistance to be provided to individuals, such as a toll-free contact telephone number for additional information and assistance;
● information on what individuals can do to protect themselves against identity theft, including contact information for the three credit reporting agencies; and
● information on where individuals can obtain additional information on protection against identity theft, such as the Federal Trade Commission's Identity Theft Web site (www.consumer.gov/idtheft).
The California Office of Privacy Protection also recommends making notices clear, conspicuous, and helpful by using clear, simple language and avoiding jargon, and it suggests avoiding a standardized format in order to mitigate the risk that the public will become complacent about the process. The Federal Trade Commission has issued guidance to businesses on notifying individuals of data breaches that reiterates several key elements of effective notification—describing clearly what is known about the data compromise, explaining what responses may be appropriate for the type of information taken, and providing information and contacts regarding identity theft in general. The Commission also suggests providing contact information for the law enforcement officer working on the case, as well as encouraging individuals who discover that their information has been misused to file a complaint with the Commission.

Both the state of California and the Federal Trade Commission recommend consulting with cognizant law enforcement officers about an incident before issuing notices to the public. In some cases, early notification or disclosure of certain facts about an incident could hamper a law enforcement investigation. For example, an otherwise unknowing thief could learn of the potential value of data stored on a laptop computer that was originally stolen purely for the value of the hardware. Thus, it is recommended that organizations consult with law enforcement regarding the timing and content of notifications. However, law enforcement investigations should not necessarily result in lengthy delays in notification. California's guidance states that it should not be necessary for a law enforcement agency to complete an investigation before notification can be given.

When providing notifications to the public, organizations should consider how to ensure that these are easily understood. Various techniques have been suggested to promote comprehension, including the concept of "layering." Layering involves providing only the most important summary facts up front—often in a graphical format—followed by one or more lengthier, more narrative versions to ensure that all the information that needs to be communicated is provided. Multilayering may be an option for achieving an easy-to-understand notice that is still complete. Similarly, providing context to the notice (explaining to consumers why they are receiving the notice and what to do with it) has been found to promote comprehension, as have visual design elements such as a tabular format, large and legible fonts, appropriate white space, and simple headings. Although these techniques were developed for other kinds of notices, they can be applied to those informing the public of data breaches.
For example, a multilayered security breach notice could include a brief description of the nature of the security breach, the potential threat to victims of the incident, and measures to be taken to protect against identity theft. The notice could provide additional details about the incident as an attachment or by providing links to additional information. This would accomplish the purpose of communicating the key details in a brief format, while still providing complete information to those who require it. Given that people may be adversely affected by a compromise of their personal information, it is critical that they fully understand the nature of the threat and the options they have to address it.
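As an illustration of how the recommended notice elements and the layering technique fit together, the sketch below models a breach notice as a simple data structure whose short summary layer is presented first, with the full detail available behind it. This is a hypothetical example written in Python for this discussion; the field names and sample wording are assumptions and are not drawn from any agency's actual notices.

```python
# Hypothetical sketch: a layered breach notice whose fields mirror the elements
# recommended by the California Office of Privacy Protection and FTC guidance.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BreachNotice:
    what_happened: str                 # general description of the incident
    information_involved: str          # types of personal information exposed
    steps_taken: str                   # actions taken to prevent further unauthorized access
    assistance_offered: str            # e.g., a toll-free number for questions and help
    protective_steps: List[str]        # fraud alerts, credit reports, and similar actions
    more_information: List[str] = field(default_factory=list)  # e.g., FTC identity theft site

    def summary_layer(self) -> str:
        """Top layer: only the most important facts, in plain language."""
        first_step = self.protective_steps[0] if self.protective_steps else ""
        return (f"What happened: {self.what_happened}\n"
                f"Information involved: {self.information_involved}\n"
                f"What you can do now: {first_step}")

    def detail_layer(self) -> str:
        """Second layer: the complete notice for readers who want more detail."""
        steps = "\n".join(f"  - {s}" for s in self.protective_steps)
        links = "\n".join(f"  - {m}" for m in self.more_information)
        return (f"{self.summary_layer()}\n"
                f"Steps we have taken: {self.steps_taken}\n"
                f"Assistance available: {self.assistance_offered}\n"
                f"All recommended protective steps:\n{steps}\n"
                f"Where to learn more:\n{links}")
```

Separating the summary layer from the detail layer in this way keeps the most important facts short and readable while still ensuring that the complete set of recommended elements is available to those who want it.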
In summary, the recent security breach at VA has highlighted the importance of implementing effective information security practices. Long-standing information security control weaknesses at VA have placed its information systems and information, including personally identifiable information, at increased risk of misuse and unauthorized disclosure. Although VA has taken steps to mitigate previously reported weaknesses, it has not implemented a comprehensive, integrated information security program, which it needs in order to effectively manage risks on an ongoing basis. Much work remains to be done. Only through strong leadership, sustained management commitment and effort, disciplined processes, and consistent oversight can VA address its persistent, long-standing control weaknesses.

To reduce the likelihood of experiencing such breaches, agencies can take a number of actions that can help guard against the possibility that databases of personally identifiable information are inadvertently compromised: strategically, they should ensure that a robust information security program is in place and that PIAs are developed. More specific practical measures aimed at preventing inadvertent data breaches include limiting the collection of personal information, limiting data retention, limiting access to personal information and training personnel accordingly, and considering the use of technological controls such as encryption when data need to be stored on mobile devices.

Nevertheless, data breaches can still occur at any time, and when they do, notification to the individuals affected and/or the public has clear benefits, allowing people the opportunity to take steps to protect themselves against the dangers of identity theft. Care is needed in defining appropriate criteria if agencies are to be required to report security breaches to the public. Further, care is also needed to ensure that notices are useful and easy to understand, so that they are effective in alerting individuals to actions they may want to take to minimize the risk of identity theft. We have previously testified that, as Congress considers legislation requiring agencies to notify individuals or the public about security breaches, it should ensure that specific criteria are defined for incidents that merit public notification. It may want to consider creating a two-tier reporting requirement, in which all security breaches are reported to OMB, and affected individuals are notified only of incidents involving significant risk. Further, Congress should consider requiring OMB to provide guidance to agencies on how to develop and issue security breach notices to the public.

Mr. Chairman, this concludes our testimony today. We would be happy to answer any questions you or other members of the committee may have.

If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6240, koontzl@gao.gov, or Gregory Wilshusen, Director, Information Security, at (202) 512-6244, wilshuseng@gao.gov. Other individuals who made key contributions include Idris Adjerid, Barbara Collier, William Cook, John de Ferrari, Valerie Hopkins, Suzanne Lightman, Barbara Oliver, David Plocher, Jamie Pressman, J. Michael Resser, and Charles Vrabel.

Information Systems: VA Computer Control Weaknesses Increase Risk of Fraud, Misuse, and Improper Disclosure. GAO/AIMD-98-175. Washington, D.C.: September 23, 1998.
VA Information Systems: The Austin Automation Center Has Made Progress in Improving Information System Controls. GAO/AIMD-99-161. Washington, D.C.: June 8, 1999.
Information Systems: The Status of Computer Security at the Department of Veterans Affairs. GAO/AIMD-00-5. Washington, D.C.: October 4, 1999.
VA Systems Security: Information System Controls at the North Texas Health Care System. GAO/AIMD-00-52R. Washington, D.C.: February 1, 2000.
VA Systems Security: Information System Controls at the New Mexico VA Health Care System. GAO/AIMD-00-88R. Washington, D.C.: March 24, 2000.
VA Systems Security: Information System Controls at the VA Maryland Health Care System. GAO/AIMD-00-117R. Washington, D.C.: April 19, 2000.
Information Technology: Update on VA Actions to Implement Critical Reforms. GAO/T-AIMD-00-74. Washington, D.C.: May 11, 2000.
VA Information Systems: Computer Security Weaknesses Persist at the Veterans Health Administration. GAO/AIMD-00-232. Washington, D.C.: September 8, 2000.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-01-255. Washington, D.C.: January 2001.
VA Information Technology: Important Initiatives Begun, Yet Serious Vulnerabilities Persist. GAO-01-550T. Washington, D.C.: April 4, 2001.
VA Information Technology: Progress Made, but Continued Management Attention Is Key to Achieving Results. GAO-02-369T. Washington, D.C.: March 13, 2002.
Veterans Affairs: Subcommittee Post-Hearing Questions Concerning the Department's Management of Information Technology. GAO-02-561R. Washington, D.C.: April 5, 2002.
Veterans Affairs: Sustained Management Attention Is Key to Achieving Information Technology Results. GAO-02-703. Washington, D.C.: June 12, 2002.
VA Information Technology: Management Making Important Progress in Addressing Key Challenges. GAO-02-1054T. Washington, D.C.: September 26, 2002.
Information Security: Weaknesses Persist at Federal Agencies Despite Progress Made in Implementing Related Statutory Requirements. GAO-05-552. Washington, D.C.: July 15, 2005.
Privacy: Key Challenges Facing Federal Agencies. GAO-06-777T. Washington, D.C.: May 17, 2006.
Personal Information: Agencies and Resellers Vary in Providing Privacy Protections. GAO-06-609T. Washington, D.C.: April 4, 2006.
Personal Information: Agency and Reseller Adherence to Key Privacy Principles. GAO-06-421. Washington, D.C.: April 4, 2006.
Data Mining: Agencies Have Taken Key Steps to Protect Privacy in Selected Efforts, but Significant Compliance Issues Remain. GAO-05-866. Washington, D.C.: August 15, 2005.
Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information during Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005.
Identity Theft: Some Outreach Efforts to Promote Awareness of New Consumer Rights Are Under Way. GAO-05-710. Washington, D.C.: June 30, 2005.
Electronic Government: Federal Agencies Have Made Progress Implementing the E-Government Act of 2002. GAO-05-12. Washington, D.C.: December 10, 2004.
Social Security Numbers: Governments Could Do More to Reduce Display in Public Records and on Identity Cards. GAO-05-59. Washington, D.C.: November 9, 2004.
Federal Chief Information Officers: Responsibilities, Reporting Relationships, Tenure, and Challenges. GAO-04-823. Washington, D.C.: July 21, 2004.
Data Mining: Federal Efforts Cover a Wide Range of Uses. GAO-04-548. Washington, D.C.: May 4, 2004.
Privacy Act: OMB Leadership Needed to Improve Agency Compliance. GAO-03-304. Washington, D.C.: June 30, 2003.
Data Mining: Results and Challenges for Government Programs, Audits, and Investigations. GAO-03-591T. Washington, D.C.: March 25, 2003.
Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002.
Information Management: Selected Agencies' Handling of Personal Information. GAO-02-1058. Washington, D.C.: September 30, 2002.
Identity Theft: Greater Awareness and Use of Existing Data Are Needed. GAO-02-766. Washington, D.C.: June 28, 2002.
Social Security Numbers: Government Benefits from SSN Use but Could Provide Better Safeguards. GAO-02-352. Washington, D.C.: May 31, 2002.
Full citations are provided in attachment 1.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The recent information security breach at the Department of Veterans Affairs (VA), in which personal data on millions of veterans were compromised, has highlighted the importance of the department's security weaknesses, as well as the ability of federal agencies to protect personal information. Robust federal security programs are critically important to properly protect this information and the privacy of individuals. GAO was asked to testify on VA's information security program, ways that agencies can prevent improper disclosures of personal information, and issues concerning notifications of privacy breaches. In preparing this testimony, GAO drew on its previous reports and testimonies, as well as on expert opinion provided in congressional testimony and other sources. For many years, significant concerns have been raised about VA's information security--particularly its lack of a robust information security program, which is vital to avoiding the compromise of government information, including sensitive personal information. Both GAO and the department's inspector general have reported recurring weaknesses in such areas as access controls, physical security, and segregation of incompatible duties. The department has taken steps to address these weaknesses, but these have not been sufficient to establish a comprehensive information security program. For example, it is still developing plans to complete a security incident response program to monitor suspicious activity and cyber alerts, events, and incidents. Without an established and implemented security program, the department will continue to have major challenges in protecting its information and information systems from security breaches such as the one it recently experienced. In addition to establishing robust security programs, agencies can take a number of actions to help guard against the possibility that databases of personally identifiable information are inadvertently compromised. A key step is to develop a privacy impact assessment--an analysis of how personal information is collected, stored, shared, and managed--whenever information technology is used to process personal information. In addition, agencies can take more specific practical measures aimed at preventing data breaches, including limiting the collection of personal information, limiting the time that such data are retained, limiting access to personal information and training personnel accordingly, and considering the use of technological controls such as encryption when data need to be stored on portable devices. When data breaches do occur, notification of those affected and/or the public has clear benefits, allowing people the opportunity to protect themselves from identity theft. Although existing laws do not require agencies to notify the public of data breaches, such notification is consistent with agencies' responsibility to inform individuals about how their information is being accessed and used, and it promotes accountability for privacy protection. That said, care is needed in defining appropriate criteria for triggering notification. Notices should be coordinated with law enforcement to avoid impeding ongoing investigations, and in order to be effective, notices should be easy to understand. Because of the possible adverse impact of a compromise of personal information, it is critical that people fully understand the threat and their options for addressing it. 
Strong leadership, sustained management commitment and effort, disciplined processes, and consistent oversight will be needed for VA to address its persistent, long-standing control weaknesses.
Interest in oil shale as a domestic energy source has waxed and waned since the early 1900s. In 1912, President Taft established an Office of Naval and Petroleum Oil Shale Reserves, and between 1916 and 1924, executive orders set aside federal land in three separate naval oil shale reserves to ensure an emergency domestic supply of oil. The Mineral Leasing Act of 1920 made petroleum and oil shale resources on federal lands available for development under the terms of a mineral lease, but large domestic oil discoveries soon after passage of the act dampened interest in oil shale. Interest resumed at various points during times of generally increasing oil prices. For example, the U.S. Bureau of Mines developed an oil shale demonstration project beginning in 1949 in Colorado, where it attempted to develop a process to extract the oil. The 1970s' energy crises stimulated interest once again, and DOE partnered with a number of energy companies, spawning a host of demonstration projects. Private efforts to develop oil shale stalled after 1982 when crude oil prices fell significantly, and the federal government dropped financial support for ongoing demonstration projects. More recently, the Energy Policy Act of 2005 directed BLM to lease its lands for oil shale research and development. In June 2005, BLM initiated a leasing program for research, development, and demonstration (RD&D) of oil shale recovery technologies. By early 2007, it had granted six small RD&D leases: five in the Piceance Basin of northwest Colorado and one in the Uintah Basin of northeast Utah. The location of oil shale resources in these two basins is shown in figure 1. The leases are for a 10-year period, and if the technologies are proven commercially viable, the lessees can significantly expand the size of the leases for commercial production into adjacent areas known as preference right lease areas.

The Energy Policy Act of 2005 directed BLM to develop a programmatic environmental impact statement (PEIS) for a commercial oil shale leasing program. During the drafting of the PEIS, however, BLM realized that, without proven commercial technologies, it could not adequately assess the environmental impacts of oil shale development and dropped from consideration the decision to offer additional specific parcels for lease. Instead, the PEIS analyzed making lands available for potential leasing and allowing industry to express interest in lands to be leased. Environmental groups then filed lawsuits, challenging various aspects of the PEIS and the RD&D program. Since then, BLM has initiated another round of oil shale RD&D leasing and is currently reviewing applications but has not made any awards.

Stakeholders in the future development of oil shale are numerous and include the federal government, state government agencies, the oil shale industry, academic institutions, environmental groups, and private citizens. Among federal agencies, BLM manages the land and the oil shale beneath it and develops regulations for its development. USGS describes the nature and extent of oil shale deposits and collects and disseminates information on the nation's water resources. DOE, through its various offices, national laboratories, and arrangements with universities, advances energy technologies, including oil shale technology. The Environmental Protection Agency (EPA) sets standards for pollutants that could be released by oil shale development and reviews environmental impact statements, such as the PEIS.
The Bureau of Reclamation (BOR) manages federally built water projects that store and distribute water in 17 western states and provides this water to users. BOR monitors the amount of water in storage and the amount of water flowing in the major streams and rivers, including the Colorado River, which flows through oil shale country and feeds these projects. BOR provides its monitoring data to federal and state agencies that are parties to three major federal, state, and international agreements that, together with other federal laws, court decisions, and agreements, govern how water within the Colorado River and its tributaries is to be shared with Mexico and among the states in which the river or its tributaries are located. These three major agreements are the Colorado River Compact of 1922, the Upper Colorado River Basin Compact of 1948, and the Mexican Water Treaty of 1944.

The states of Colorado and Utah have regulatory responsibilities over various activities that occur during oil shale development, including activities that impact water. Through authority delegated by EPA under the Clean Water Act, Colorado and Utah regulate discharges into surface waters. Colorado and Utah also have authority over the use of most water resources within their respective state boundaries. They have established extensive legal and administrative systems for the orderly use of water resources, granting water rights to individuals and groups. Water rights in these states are not automatically attached to the land upon which the water is located. Instead, companies or individuals must apply to the state for a water right and specify the amount of water to be used, its intended use, and the specific point from which the water will be diverted for use, such as a specific point on a river or stream. Utah approves the application for a water right through an administrative process, and Colorado approves the application for a water right through a court proceeding. The date of the application establishes its priority—earlier applicants have preferential entitlement to water over later applicants if water availability decreases during a drought. These earlier applicants are said to have senior water rights. When an applicant puts a water right to beneficial use, it is referred to as an absolute water right. Until the water is used, however, the applicant is said to have a conditional water right. Even if the applicant has not yet put the water to use, such as when the applicant is waiting on the construction of a reservoir, the date of the application still establishes priority. Water rights in both Colorado and Utah can be bought and sold, and strong demand for water in these western states facilitates their sale.

A significant challenge to the development of oil shale lies in the current state of technology for economically extracting oil from oil shale. To extract the oil, the rock needs to be heated to very high temperatures—ranging from about 650 to 1,000 degrees Fahrenheit—in a process known as retorting. Retorting can be accomplished primarily by two methods. One method involves mining the oil shale, bringing it to the surface, and heating it in a vessel known as a retort. Mining oil shale and retorting it has been demonstrated in the United States and is currently done to a limited extent in Estonia, China, and Brazil.
However, a commercial mining operation with surface retorts has never been developed in the United States because the oil it produces competes directly with conventional crude oil, which historically has been less expensive to produce. The other method, known as an in-situ process, involves drilling holes into the oil shale, inserting heaters to heat the rock, and then collecting the oil as it is freed from the rock. Some in-situ technologies have been demonstrated on very small scales, but other technologies have yet to be proven, and none has been shown to be economically or environmentally viable. Nevertheless, according to some energy experts, the key to developing our country's oil shale is the development of an in-situ process because most of the richest oil shale is buried beneath hundreds to thousands of feet of rock, making mining difficult or impossible. Additional economic challenges include transporting the oil produced from oil shale to refineries, because pipelines and major highways are scarce in the remote areas where the oil shale is located, and the large-scale infrastructure that would be needed to supply power to heat oil shale is lacking. In addition, average crude oil prices have been lower than the threshold necessary to make oil shale development profitable over time.

Large-scale oil shale development also brings socioeconomic impacts. While there are obvious positive impacts such as the creation of jobs, increases in wealth, and tax and royalty payments to governments, there are also negative impacts to local communities. Oil shale development can bring a sizeable influx of workers who, along with their families, put additional stress on local infrastructure such as roads, housing, municipal water systems, and schools. Development from expansion of extractive industries, such as oil shale or oil and gas, has typically followed a "boom and bust" cycle in the West, making planning for growth difficult. Furthermore, traditional rural uses could be replaced by the industrial development of the landscape, and tourism that relies on natural resources, such as hunting, fishing, and wildlife viewing, could be negatively impacted.

In addition to the technological, economic, and social challenges to developing oil shale resources, there are a number of significant environmental challenges. For example, construction and mining activities can temporarily degrade air quality in local areas. There can also be long-term regional increases in air pollutants from oil shale processing, upgrading, pipelines, and the generation of additional electricity. Pollutants, such as dust, nitrogen oxides, and sulfur dioxide, can contribute to the formation of regional haze that can affect adjacent wilderness areas, national parks, and national monuments, which can have very strict air quality standards. Because oil shale operations clear large surface areas of topsoil and vegetation, some wildlife habitat will be lost. Important species likely to be negatively impacted from loss of wildlife habitat include mule deer, elk, sage grouse, and raptors. Noise from oil shale operations, access roads, transmission lines, and pipelines can further disturb wildlife and fragment their habitat. In addition, visual resources in the area will be negatively impacted as people generally consider large-scale industrial sites, pipelines, mines, and areas cleared of vegetation to be visually unpleasant (see fig. 2 for a typical view within the Piceance Basin).
Environmental impacts from oil shale development could be compounded by additional impacts in the area resulting from coal mining, construction, and extensive oil and gas development. Air quality and wildlife habitat appear to be particularly susceptible to the cumulative effect of these impacts, and according to some environmental experts, air quality impacts may be the limiting factor for the development of a large oil shale industry in the future. Lastly, the withdrawal of large quantities of surface water for oil shale operations could negatively impact aquatic life downstream of the oil shale development. Impacts to water resources are discussed in detail in the next section of this report.

Oil shale development could have significant impacts on the quality and quantity of surface and groundwater resources, but the magnitude of these impacts is unknown because some technologies have yet to be commercially proven, the size of a future oil shale industry is uncertain, and knowledge of current water conditions and groundwater flow is limited. Despite not being able to quantify the impacts from oil shale development, hydrologists and engineers have been able to determine the qualitative nature of impacts because other types of mining, construction, and oil and gas development cause disturbances similar to those expected from oil shale development. According to these experts, in the absence of effective mitigation measures, impacts from oil shale development to water resources could result from disturbing the ground surface during the construction of roads and production facilities, withdrawing water from streams and aquifers for oil shale operations, underground mining and extraction, and discharging waste waters from oil shale operations.

The quantitative impacts of future oil shale development cannot be measured with reasonable certainty at this time primarily because of three unknowns: (1) the unproven nature of in-situ technologies, (2) the uncertain size of a future oil shale industry, and (3) insufficient knowledge of current groundwater conditions. First, geological maps suggest that most of the prospective oil shale in the Uintah and Piceance Basins is more amenable to in-situ production methods than to mining because the oil shale lies buried beneath hundreds to thousands of feet of rock. Studies have concluded that much of this rock is generally too thick to be removed economically by surface mining, and deep subsurface mines are likely to be costly and may recover no more than 60 percent of the oil shale. Although several companies have been working on the in-situ development of oil shale, none of these processes has yet been shown to be commercially viable. Most importantly, the extent of the impacts of in-situ retorting on aquifers is unknown, and it is uncertain whether methods for reclamation of the zones that are heated will be effective. Second, it is not possible to quantify impacts on water resources with reasonable certainty because it is not yet possible to predict how large an oil shale industry may develop. The size of the industry would have a direct relationship to water impacts. Within the PEIS, BLM has stated that the level and degree of the potential impacts of oil shale development cannot be quantified because this would require making many speculative assumptions regarding the potential of the oil shale, unproven technologies, project size, and production levels.
Third, hydrologists at USGS and BLM state that not enough is known about current surface water and groundwater conditions in the Piceance and Uintah Basins. More specifically, comprehensive baseline conditions for surface water and groundwater do not exist. Therefore, without knowledge of current conditions, it is not possible to detect changes in groundwater conditions, much less attribute changes to oil shale development. Impacts to water resources from oil shale development would result primarily from disturbing the ground surface, withdrawing surface water and groundwater, underground mining, and discharging water from operations. In the absence of effective mitigation measures, ground disturbance activities associated with oil shale development could degrade surface water quality, according to the literature we reviewed and water experts to whom we spoke. Both mining and the in-situ production of oil shale are expected to involve clearing vegetation and grading the surface for access roads, pipelines, production facilities, buildings, and power lines. In addition, the surface that overlies the oil shale would need to be cleared and graded in preparation for mining or drilling boreholes for in-situ extraction. The freshly cleared and graded surfaces would then be exposed to precipitation, and subsequent runoff would drain downhill toward existing gullies and streams. If not properly contained or diverted away from these streams, this runoff could contribute sediment, salts, and possibly chemicals or oil shale products into the nearby streams, degrading their water quality. Surface mining would expose the entire area overlying the oil shale that is to be mined while subsurface mining would expose less surface area and thereby contribute less runoff. One in-situ operation proposed by Shell for its RD&D leases would require clearing of the entire surface overlying the oil shale because wells are planned to be drilled as close as 10 feet apart. Other in-situ operations, like those proposed by American Shale Oil Company and ExxonMobil, envision directionally drilling wells in rows that are far enough apart so that strips of undisturbed ground would remain. The adverse impacts from ground disturbances would remain until exposed surfaces were properly revegetated. If runoff containing excessive sediment, salts, or chemicals finds its way into streams, aquatic resources could be adversely impacted, according to the water experts to whom we spoke and the literature we reviewed. Although aquatic populations can handle short-term increases in sediment, long-term increases could severely impact plant and animal life. Sediment could suffocate aquatic plants and decrease the photosynthetic activity of these plants. Sediment could also suffocate invertebrates, fish, and incubating fish eggs and adversely affect the feeding efficiency and spawning success of fish. Sedimentation would be exacerbated if oil shale activities destroy riparian vegetation because these plants often trap sediment, preventing it from entering streams. In addition, toxic substances derived from spills, leaks from pipelines, or leaching of waste rock piles could increase mortality among invertebrates and fish. Surface and underground mining of oil shale will produce waste rock that, according to the literature we reviewed and water experts to whom we spoke, could contaminate surface waters. Mined rock that is retorted on site would produce large quantities of spent shale after the oil is extracted. 
Such spent shale is generally stored in large piles that would also be exposed to surface runoff that could possibly transport sediment, salts, selenium, metals, and residual hydrocarbons into receiving streams unless properly stabilized and reclaimed. EPA studies have shown that water percolating through such spent shale piles transports pollutants long after abandonment of operations if not properly mitigated. In addition to stabilizing and revegetating these piles, mitigation measures could involve diverting runoff into retention ponds, where it could be treated, and lining the surface below waste rock with impervious materials that could prevent water from percolating downward and transporting pollutants into shallow groundwater. However, if improperly constructed, retention ponds would not prevent the degradation of shallow groundwater, and some experts question whether the impervious materials would hold up over time. Withdrawing water from streams and rivers for oil shale operations could have temporary adverse impacts on surface water, according to the experts to whom we spoke and the literature we reviewed. Oil shale operations need water for a number of activities, including mining, constructing facilities, drilling wells, generating electricity for operations, and reclamation of disturbed sites. Water for most of these activities is likely to come from nearby streams and rivers because it is more easily accessible and less costly to obtain than groundwater. Withdrawing water from streams and rivers would decrease flows downstream and could temporarily degrade downstream water quality by depositing sediment within the stream channels as flows decrease. The resulting decrease in water would also make the stream or river more susceptible to temperature changes—increases in the summer and decreases in the winter. Elevated temperatures could have adverse impacts on aquatic life, including fishes and invertebrates, which need specific temperatures for proper reproduction and development. Elevated water temperatures would also decrease dissolved oxygen, which is needed by aquatic animals. Decreased flows could also damage or destroy riparian vegetation. Removal of riparian vegetation could exacerbate negative impacts on water temperature and oxygen because such vegetation shades the water, keeping its temperature cooler. Similarly, withdrawing water from shallow aquifers—an alternative water source—would have temporary adverse impacts on groundwater resources. Withdrawals would lower water levels within these shallow aquifers and the nearby streams and springs to which they are connected. Extensive withdrawals could reduce groundwater discharge to connected streams and springs, which in turn could damage or remove riparian vegetation and aquatic life. Withdrawing water from deeper aquifers could have longer-term impacts on groundwater and connected streams and springs because replenishing these deeper aquifers with precipitation generally takes longer. Underground mining would permanently alter the properties of the zones that are mined, thereby affecting groundwater flow through these zones, according to the literature we reviewed and the water experts to whom we spoke. The process of removing oil shale from underground mines would create large tunnels from which water would need to be removed during mining operations. The removal of this water through pumping would decrease water levels in shallow aquifers and decrease flows to streams and springs that are connected. 
When mining operations cease, the tunnels would most likely be filled with waste rock, which would have a higher degree of porosity and permeability than the original oil shale that was removed. Groundwater flow through this material would increase permanently, and the direction and pattern of flows could change permanently. Flows through the abandoned tunnels could decrease ground water quality by increasing concentrations of salts, metals, and hydrocarbons within the groundwater. In-situ extraction would also permanently alter aquifers because it would heat the rock to temperatures that transform the solid organic compounds within the rock into liquid hydrocarbons and gas that would fracture the rock upon escape. Water would be cooked off during the heating processes. Some in-situ operations envision using a barrier to isolate thick zones of oil shale with intervening aquifers from any adjacent aquifers and pumping out all the groundwater from this isolated area before retorting. Other processes, like those envisioned by ExxonMobil and AMSO, involve trying to target thinner oil shale zones that do not have intervening aquifers and, therefore, would theoretically not disturb the aquifers. However, these processes involve fracturing the oil shale, and it is unclear whether the fractures could connect the oil shale to adjacent aquifers, possibly contaminating the aquifer with hydrocarbons. After removal of hydrocarbons from retorted zones, the porosity and permeability of the zones are expected to increase, thereby allowing increased groundwater flow. Some companies propose rinsing retorted zones with water to remove residual hydrocarbons. However, the effectiveness of rinsing is unproven, and residual hydrocarbons, metals, salts, and selenium that were mobilized during retorting could contaminate the groundwater. Furthermore, the long-term effects of groundwater flowing through retorted zones are unknown. The discharge of waste waters from operations would temporarily increase water flows in receiving streams. According to BLM’s PEIS, waste waters from oil shale operations that could be discharged include waters used in extraction, cooling, the production of electricity, and sewage treatment, as well as drainage water collected from spent oil shale piles and waters pumped from underground mines or wells used to dewater the retorted zones. Discharges could decrease the quality of downstream water if the discharged water is of lower quality, has a higher temperature, or contains less oxygen. Lower-quality water containing toxic substances could increase fish and invertebrate mortality. Also, increased flow into receiving streams could cause downstream erosion. However, at least one company is planning to recycle waste water and water produced during operations so that discharges and their impacts could be substantially reduced. While commercial oil shale development requires water for numerous activities throughout its life cycle, estimates vary widely for the amount of water needed to commercially produce oil shale. This variation in estimates stems primarily from the uncertainty associated with reclamation technologies for in-situ oil shale development and because of the various ways to generate power for oil shale operations, which use different amounts of water. 
Based on our review of available information for the life cycle of oil shale production, existing estimates suggest that from about 1 to 12 barrels of water could be needed for each barrel of oil produced from in-situ operations, with an average of about 5 barrels. About 2 to 4 barrels of water could be needed for each barrel of oil produced from mining operations with a surface retort. Water is needed for five distinct groups of activities that occur during the life cycle of oil shale development: (1) extraction and retorting, (2) upgrading of shale oil, (3) reclamation, (4) power generation, and (5) population growth associated with oil shale development. Extraction and retorting. During extraction and retorting, water is used for building roads, constructing facilities, controlling dust, mining and handling ore, drilling wells for in-situ extraction, cooling of equipment and shale oil, producing steam, in-situ fracturing of the retort zones, and preventing fire. Water is also needed for on-site sanitary and potable uses. Upgrading of shale oil. Water is needed to upgrade, or improve, the quality of the produced shale oil so that it can be easily transported to a refinery. The degree to which the shale oil needs to be upgraded varies according to the retort process. Shale oil produced by surface retorting generally requires more upgrading, and therefore, more water than shale oil produced from in-situ operations that heat the rock at lower temperatures and for a longer time, producing higher-quality oil. Reclamation. During reclamation of mine sites, water is needed to cool, compact, and stabilize the waste piles of retorted shale and to revegetate disturbed surfaces, including the surfaces of the waste piles. For in-situ operations, in addition to the typical revegetation of disturbed surfaces, as shown in figure 3, water also will be needed for reclamation of the subsurface retorted zones to remove residual hydrocarbons. The volume of water that would be needed to rinse the zones at present is uncertain and could be large, depending primarily on how many times the zones need to be rinsed. In addition, some companies envision reducing water demands for reclamation, as well as for extracting, retorting, and upgrading, by recycling water produced during oil shale operations or by treating and using water produced from nearby oil and gas fields. Recycling technology, however, has not been shown to be commercially viable for oil shale operations, and there could be legal restrictions on using water produced from oil and gas operations. Power generation. Water is also needed throughout the life cycle of oil shale production for generating electricity from power plants needed in operations. The amount of water used to produce this electricity varies significantly according to generation and cooling technologies employed. For example, thermoelectric power plants use a heat source to make steam, which turns a turbine connected to a generator that makes the electricity. The steam is captured and cooled, often with additional water, and is condensed back into water that is then recirculated through the system to generate more steam. Plants that burn coal to produce steam use more water for cooling than combined cycle natural gas plants, which combust natural gas to turn a turbine and then capture the waste heat to produce steam that turns a second turbine, thereby producing more electricity per gallon of cooling water. Thermoelectric plants can also use air instead of water to condense the steam. 
These plants use fans to cool the steam and consume virtually no water, but are less efficient and more costly to run. Population growth. Additional water would be needed to support an anticipated increase in population due to oil shale workers and their families who migrate into the area. This increase in population can increase the demand for water for domestic uses. In isolated rural areas where oil shale is located, sufficiently skilled workers may not be available. Based on studies that we reviewed, the total amount of water needed for in-situ oil shale operations could vary widely, from about 1 to 12 barrels of water per barrel of oil produced over the entire life cycle of oil shale operations. The average amount of water needed for in-situ oil shale production as estimated by these studies is about 5 barrels. This range is based on information contained primarily in studies published in 2008 and 2009 by ExxonMobil, Shell, the Center for Oil Shale Technology and Research at the Colorado School of Mines, the National Oil Shale Association, and contractors to the state of Colorado. Figure 3 shows Shell’s in-situ experimental site in Colorado. Because only two studies examined all five groups of activities that comprise the life cycle of oil shale production, we reviewed water estimates for each group of activities that is described within each of the eight studies we reviewed. We calculated the minimum and the maximum amount of water that could be needed for in-situ oil shale development by summing the minimum estimates and the maximum estimates, respectively, for each group of activities. Differences in estimates are due primarily to the uncertainty in the amount of water needed for reclamation and to the method of generating power for operations. Table 1 shows the minimum, maximum, and average amounts of water that could be needed for each of the five groups of activities that comprise the life cycle of in-situ oil shale development. The table shows that reclamation activities contribute the largest amount of uncertainty to the range of total water needed for in-situ oil shale operations. Reclamation activities, which have not yet been developed, contribute from 0 to 5.5 barrels of water for each barrel of oil produced, according to the studies we analyzed. This large range is due primarily to the uncertainty in how much rinsing of retorted zones would be necessary to remove residual hydrocarbons and return groundwater to its original quality. On one end of the range, scientists at ExxonMobil reported that retorted zones may be reclaimed by rinsing them several times and using 1 barrel of water or less per barrel of oil produced. However, another study suggests that many rinses and many barrels of water may be necessary. For example, modeling by the Center for Oil Shale Technology and Research suggests that if the retorted zones require 8 or 10 rinses, 5.5 barrels of water could be needed for each barrel of oil produced. Additional uncertainty lies in estimating how much additional porosity in retorted zones will be created and in need of rinsing. Some scientists believe that the removal of oil will double the amount of pore space, effectively doubling the amount of water needed for rinsing. Other scientists question whether the newly created porosity will have enough permeability so that it can be rinsed. Also, the efficiency of recycling waste water that could be used for additional rinses adds to the amount of uncertainty. 
For example, ExxonMobil scientists believe that almost no new fresh water would be needed for reclamation if the company can recycle waste water produced from oil shale operations or treat and use saline water produced from nearby oil and gas wells. Table 1 also shows that the water needs for generating power contribute significant uncertainty to the estimates of total water needed for in-situ extraction. Estimates of water needed to generate electricity range from near zero for thermoelectric plants that are cooled by air to about 3.4 barrels for coal-fired thermoelectric plants that are cooled by water, according to the studies that we analyzed. These studies suggested that from about 0.7 to about 1.2 barrels of water would be needed if electricity is generated from combined cycle plants fueled by natural gas, depending on the power requirements of the individual oil shale operation. Overall power requirements are large for in-situ operations because of the many electric heaters used to heat the oil shale over long periods of time—up to several years for one technology proposed by industry. However, ExxonMobil, Shell, and AMEC—a contractor to the state of Colorado—believe that an oil shale industry of significant size will not use coal-fired electric power because of its greater water requirements and higher carbon dioxide emissions. In fact, according to an AMEC study, estimates for power requirements of a 1.5 million-barrel-per-day oil shale industry would exceed the current coal-fired generating capacity of the nearest plant by about 12 times, and therefore would not be feasible. Industry representatives with whom we spoke said that it is more likely that a large oil shale industry would rely on natural gas-powered combined cycle thermoelectric plants, with the gas coming from gas fields within the Piceance and Uintah Basins or from gas produced during the retort process. ExxonMobil reports that it envisions cooling such plants with air, thereby using next to no water for generating electricity. However, cooling with air can be more costly and will ultimately require more electricity. In addition, table 1 shows that extracting and retorting activities and upgrading activities also contribute to the uncertainty in the estimates of water needed for in-situ operations, but this uncertainty is significantly less than that of reclamation activities or power generation. The range for extraction and retorting is from 0 to 1 barrel of water. The range for upgrading the produced oil is from 0.6 to 1.6 barrels of water, with both the minimum and maximum of this range cited in a National Oil Shale Association study. Hence, each of these two groups of activities contributes about 1 barrel of water to the range of estimates for the total amount of water needed for the life cycle of in-situ oil shale production. Last, table 1 shows there is little variation in the likely estimates of water needed to support the anticipated population increase associated with in-situ oil shale development. Detailed analyses of water needs for population growth associated with an oil shale industry are presented in the PEIS, a study by the URS Corporation, and a study completed by the Institute for Clean and Secure Energy at the University of Utah.
These estimates often considered the number of workers expected to move into the area, the size of the families to which these workers belong, the ratio of single-family to multifamily housing that would accommodate these families, and per capita water consumption associated with occupants of different housing types. Figure 4 compares the total water needs over the life cycle of in-situ oil shale development according to the various sources of power generation, as suggested by the studies we reviewed. This is a convenient way to visualize the water needs according to power source. The minimum, average, and maximum values are the sum of the minimum, average, and maximum water needs, respectively, for all five groups of activities. Most of the difference between the minimum and the maximum of each power type is due to water needed for reclamation. Estimates of water needed for mining oil shale and retorting it at the surface vary from about 2 to 4 barrels of water per barrel of oil produced over the entire life cycle of oil shale operations. The average is about 3 barrels of water. This range is based primarily on information obtained through a survey of active oil shale companies completed by the National Oil Shale Association in 2009 and information obtained from three different retorts, as published in a report by the Office of Technology Assessment (OTA) in 1980. Figure 5 shows a surface retort that is operating today at a pilot plant. Because only two studies contained reliable information for all five groups of activities that comprise the life cycle of oil shale production, we reviewed water estimates for each group of activities that is described within each of the eight studies we reviewed. We calculated the minimum and the maximum amount of water that could be needed for mining oil shale by summing the minimum estimates and the maximum estimates, respectively, for each group of activities. The range of water estimates for mining oil shale is far narrower than that of in-situ oil shale production because, according to the studies we reviewed, there are no large differences in water estimates for any of the activities. Table 2 shows the minimum, maximum, and average amounts of water that could be needed for each of the groups of activities that comprise the life cycle of oil shale development that relies upon mining and surface retorting. Unlike for in-situ production, we could not segregate extraction and retorting activities from upgrading activities because these activities were grouped together in some of the studies on mining and surface retorting. Nonetheless, as shown in table 2, the combination of these activities contributes 1 barrel of water to the total range of estimated water needed for the mining and surface retorting of oil shale. This 1 barrel of water results primarily from the degree to which the resulting shale oil would need upgrading. An oil shale company representative told us that estimates for upgrading shale oil vary due to the quality of the shale oil produced during the retort process, with higher grades of shale oil needing less processing. Studies in the OTA report did not indicate much variability in water needs for the mining of the oil shale and the handling of ore. Retorts also produce water—about half a barrel for each barrel of oil produced—by freeing water that is locked in organic compounds and minerals within the oil shale. Studies in the OTA report took this produced water into consideration and reported the net anticipated water use. 
Table 2 also shows that differences in water estimates for generating power contributed about 1 barrel of water to the range of water needed for mining and surface retorting. We obtained water estimates for power generation either directly from the studies or from power requirements cited within the studies. Estimates of water needed range from zero barrels for electricity coming from thermoelectric plants that are cooled by air to about 0.9 barrels for coal-fired thermoelectric plants that are cooled with water. About 0.3 barrels of water are needed to generate electricity from combined cycle plants fueled by natural gas. Startup oil shale mining operations, which have low overall power requirements, are more likely to use electricity from coal-fired power plants, according to data supplied by oil shale companies, because such generating capacity is available locally. However, a large-scale industry may generate electricity from the abundant natural gas in the area or from gas that is produced during the retorting of oil shale. Water needs for reclamation or for supporting an anticipated increase in population associated with mining oil shale show little variability in the studies that we reviewed. Figure 6 compares the total water needs over the life cycle of mining and surface retorting of oil shale according to the various sources of power generation. The minimum, average, and maximum values are the sum of the minimum, average, and maximum water needs, respectively, for all five groups of activities. Water is likely to be available for the initial development of an oil shale industry, but the eventual size of the industry may be limited by the availability of water and demands for water to meet other needs. Oil shale companies operating in Colorado and Utah will need to have water rights to develop oil shale, and representatives from all of the companies with which we spoke are confident that they hold at least enough water rights for their initial projects and will likely be able to purchase more rights in the future. Sources of water for oil shale will likely be surface water in the immediate area, such as the White River, but groundwater could also be used. Nonetheless, the possibility of competing municipal and industrial demands for future water, a warming climate, future needs under existing compacts, and additional water needs for the protection of threatened and endangered fishes, may eventually limit the size of a future oil shale industry. Companies with interest in oil shale already hold significant water rights in the Piceance Basin of Colorado, and representatives from all of the companies with whom we spoke felt confident that they either had or could obtain sufficient water rights to supply at least their initial operations in the Piceance and Uintah Basins. Western Resource Advocates, a nonprofit environmental law and policy organization, conducted a study of water rights ownership in the Colorado and White River Basins of Colorado and concluded that companies have significant water rights in the area. For example, the study found that Shell owns three conditional water rights for a combined diversion of about 600 cubic feet per second from the White River and one of its tributaries and has conditional rights for the combined storage of about 145,000 acre-feet in two proposed nearby reservoirs. Similarly, the study found that ExxonMobil owns conditional storage capacities of over 161,000 acre-feet on 17 proposed reservoirs in the area. 
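Because these rights are expressed in different units (diversion rights in cubic feet per second and storage rights in acre-feet), a simple conversion helps relate them to the annual volumes discussed elsewhere in this report. The short calculation below is an illustrative sketch only, written in Python; the function name is ours, the conversion factor of 43,560 cubic feet per acre-foot is standard, and the assumption of continuous year-round diversion overstates what a right holder would actually withdraw.

    # Illustrative conversion of a diversion right in cubic feet per second (cfs)
    # to an annual volume in acre-feet, assuming continuous diversion all year.
    CUBIC_FEET_PER_ACRE_FOOT = 43_560
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def cfs_to_acre_feet_per_year(cfs):
        """Convert a flow rate in cfs to acre-feet per year."""
        return cfs * SECONDS_PER_YEAR / CUBIC_FEET_PER_ACRE_FOOT

    # Shell's reported conditional diversion rights of about 600 cfs, if exercised continuously
    print(round(cfs_to_acre_feet_per_year(600)))  # about 434,000 acre-feet per year

In practice, diversions under such rights would be intermittent and limited by stream conditions and the terms of the rights, but the conversion shows why even a fraction of these conditional rights, captured in storage, would be large relative to the water needs discussed in the scenarios later in this report.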
In Utah, the Oil Shale Exploration Company (OSEC), which owns an RD&D lease, has obtained a water right on the White River that appears sufficient for reopening the White River Mine and has cited the possibility of renewing an expired agreement with the state of Utah for obtaining additional water from shallow aquifers connected to the White River. Similarly, Red Leaf Resources cites the possibility of drilling a water well on the state-owned lands that it has leased for oil shale development. In addition to exercising existing water rights and agreements, there are other options for companies to obtain more water rights in the future, according to state officials in Colorado and Utah. In Colorado, companies can apply for additional water rights in the Piceance Basin on the Yampa and White Rivers. Shell recently applied—but subsequently withdrew the application—for conditional rights to divert up to 375 cubic feet per second from the Yampa River for storage in a proposed reservoir that would hold up to 45,000 acre-feet for future oil shale development. In Utah, however, officials with the State Engineer’s office said that additional water rights are not available, but that if companies want additional rights, they could purchase them from other owners. Many people who are knowledgeable on western water rights said that the owners of these rights in Utah and Colorado would most likely be agricultural users, based on a history of senior agricultural rights being sold to developers in Colorado. For example, the Western Resource Advocates study identified that in the area of the White River, ExxonMobil Corporation has acquired full or partial ownership in absolute water rights on 31 irrigation ditches from which the average amount of water diverted per year has exceeded 9,000 acre-feet. These absolute water rights have appropriation dates ranging from 1883 through 1918 and are thus senior to holders of many other water rights, but their use would need to be changed from irrigation or agricultural to industrial in order to be used for oil shale. Also, additional rights may be available in Utah from other sources. According to state water officials in Utah, the settlement of an ongoing legal dispute between the state and the Ute Indian tribe could result in the tribe gaining rights to 105,000 acre-feet per year in the Uintah Basin. These officials said that it is possible that the tribe could lease the water rights to oil shale companies. There are also two water conservancy districts that each hold rights to tens of thousands of acre-feet per year of water in the Uintah Basin that could be developed for any use as determined by the districts, including for oil shale development. Most of the water needed for oil shale development is likely to come first from surface flows, as groundwater is more costly to extract and generally of poorer quality in the Piceance and Uintah Basins. However, companies may use groundwater in the future should they experience difficulties in obtaining rights to surface water. Furthermore, water is likely to come initially from surface sources immediately adjacent to development, such as the White River and its tributaries that flow through the heart of oil shale country in Colorado and Utah, because the cost of pumping water over long distances and rugged terrain would be high, according to water experts. Shell’s attempt to obtain water from the more distant Yampa River shows the importance of first securing nearby sources. 
In relation to the White River, the Yampa lies about 20 to 30 miles farther north and at a lower elevation than Shell's RD&D leases. Hence, additional costs would be incurred to transport and pump the Yampa's water to a reservoir for storage and eventual use. Shell withdrew its application, citing the global economic downturn. At least one company has considered obtaining surface water from the even more distant Colorado River, about 30 to 50 miles to the south of the RD&D leases where oil shale companies already hold considerable water rights, but again, the costs of transporting and pumping water would be greater. Although water for initial oil shale development in Utah is also likely to come from the White River, as indicated by OSEC's interest, water experts have cited the Green River as a potential water source. However, the longer distance and rugged terrain are likely to make this challenging. Figure 7 shows the locations of the oil shale resource areas and their proximity to local surface water sources. In addition to surface water, oil shale companies could use groundwater for operations should more desirable surface water sources be unavailable. However, companies would need to acquire the rights to this groundwater. Shallow groundwater in the Piceance and Uintah Basins occurs primarily within alluvial aquifers, which are aquifers composed of unconsolidated sand and gravel associated with nearby streams and rivers. The states of Utah and Colorado refer to these aquifers legally as tributary waters, or waters that are connected to surface waters and hence are considered to be part of the surface water source when appropriating water rights. Any withdrawal of tributary water is considered to be a withdrawal from the adjacent or nearby stream or river. Less is known about deep groundwater in the Piceance and Uintah Basins, but hydrologists consider it to be of lesser quality, with the water generally becoming increasingly saline with depth. State officials in Utah said that they consider this deeper groundwater to be tributary water, and state officials in Colorado said that they generally consider this deeper water also to be tributary water but will allow water rights applicants to prove otherwise. In the Piceance and Uintah Basins, groundwater is not heavily used, illustrating the reluctance of water users to tap this source. Nevertheless, Shell is considering the use of groundwater, and ExxonMobil is considering using water co-produced with natural gas from nearby but deeper formations in the Piceance Basin. Also, BLM notes that there is considerable groundwater in the regional Bird's Nest Aquifer in the area surrounding OSEC's RD&D lease in the Uintah Basin. In addition, representatives of oil shale companies said they plan to use water that is released from the organic components of oil shale during the retort process. Because this water is chemically bound within the solid organic components rather than being in a liquid phase, it is not generally viewed as groundwater, but it is unclear how it would be regulated. Developing a sizable oil shale industry may take many years—perhaps 15 or 20 years by some industry and government estimates—and such an industry may have to contend with increased demands for water to meet other needs. Substantial population growth and its correlative demand for water are expected in the oil shale regions of Colorado and Utah. This region in Colorado is a fast-growing area.
State officials expect that the population within the region surrounding the Yampa, White, and Green Rivers in Colorado will triple between 2005 and 2050. These officials expect that this added population and corresponding economic growth by 2030 will increase municipal and industrial demands for water, exclusive of oil shale development, by about 22,000 acre-feet per year, or a 76 percent increase from 2000. Similarly in Utah, state officials expect the population of the Uintah Basin to more than double its 1998 size by 2050 and that correlative municipal and industrial water demands will increase by 7,000 acre-feet per year, or an increase of about 30 percent since the mid-1990s. Municipal officials in two communities adjacent to proposed oil shale development in Colorado said that they were confident of meeting their future municipal and industrial demands from their existing senior water rights, and as such will probably not be affected by the water needs of a future oil shale industry. However, large withdrawals could impact agricultural interests and other downstream water users in both states, as oil shale companies may purchase existing irrigation and agricultural rights for their oil shale operations. State water officials in Colorado told us that some holders of senior agricultural rights have already sold their rights to oil shale companies. A future oil shale industry may also need to contend with a decreased physical supply of water regionwide due to climate change. A contractor to the state of Colorado ran five projections through a number of climate models and found that their average result suggested that by 2040, a warming climate may reduce the amount of water in the White River in Colorado by about 13 percent, or 42,000 acre-feet. However, there was much variability among the five results, ranging from a 40 percent decrease to a 16 percent increase in today’s flow and demonstrating the uncertainty associated with climate predictions. Nevertheless, any decrease would mean that less water would be available downstream in Utah. Because of a warmer climate, the contractor also found that water needed to irrigate crops could increase significantly in the White River Basin, but it is uncertain whether the holders of the water rights used to irrigate the crops would be able to secure this additional water. Simultaneously, the model shows that summer precipitation is expected to decrease, thus putting pressure on farmers to withdraw even more water from local waterways. In addition, the contractor predicted that more precipitation is likely to fall as rain rather than snow in the early winter and late spring. Because snow functions as a natural storage reservoir by releasing water into streams and aquifers as temperatures rise, less snow means that storage and runoff schedules will be altered and less water may be available at different times of the year. Although the model shows that the White River is expected to have reduced flows due to climate change, the same model shows that the Yampa is more likely to experience an increased flow because more precipitation is expected to fall in the mountains, which are its headwaters. Hence, oil shale companies may look to the Yampa for additional water if restrictions on the White are too great, regardless of increased costs to transport the water. 
While there is not a similar study on climate change impacts for Utah, it is likely that some of the impacts will be similar, considering the close proximity and similar climates in the Uintah and Piceance Basins. Colorado’s and Utah’s obligations under interstate compacts could further reduce the amount of water available for development. The Colorado River Compact of 1922, which prescribes how the states through which the Colorado River and its tributaries flow share the river’s water, is based on uncharacteristically high flows, as cited in a study contracted by the state of Colorado. Water regulators have since shown that the flow rates used to allocate water under the compact may be 21 percent higher than average historical flow rates, thereby overestimating the amount of water that may be available to share. As a result, the upstream states of Colorado and Utah may not have as much water to use as they had originally planned and may be forced to curtail water consumption so that they can deliver the amount of water that was agreed on in the compact to the downstream states of Arizona, Nevada, and California. Another possible limitation on withdrawals from the Colorado River system is the requirement to protect certain fish species under the Endangered Species Act. Federal officials stated that withdrawals from the Colorado River system, including its tributaries the White and Green Rivers, could be limited by the amount of flow that is necessary to sustain populations of threatened or endangered fishes. Although there are currently no federally mandated minimum flow requirements on the White River in either Utah or Colorado, the river is home to populations of the federally endangered Colorado Pikeminnow, and the Upper Colorado Recovery Program is currently working on a biological opinion which may prescribe minimum flow requirements. In addition, the Green River in Utah is home to populations of four threatened or endangered fishes: the Colorado Pikeminnow, the Razorback Sucker, the Humpback Chub, and the Bonytail Chub. For this reason, agency officials are recommending minimum flow requirements on the Green, which could further restrict the upstream supply of available water. Although oil shale companies own rights to a large amount of water in the oil shale regions of Colorado and Utah, there are physical and legal limits on how much water they can ultimately withdraw from the region’s waterways, and thus limits on the eventual size of the overall industry. Physical limits are set by the amount of water that is present in the river, and the legal limit is the sum of the water that can be legally withdrawn from the river as specified in the water rights held by downstream users. Examining physical limits can demonstrate how much water may be available to all water users. Subtracting the legal limit can demonstrate how much water is available for additional development, providing that current water rights and uses do not change in the future. The state of Colorado refers to this remaining amount of water in the river as that which is physically and legally available. To put the water needs of a potential oil shale industry in Colorado into perspective, we compared the needs of oil shale industries of various sizes to what currently is physically available in the White River at Meeker, Colorado—a small town immediately east of high-quality oil shale deposits in the Piceance Basin. 
We also compared the water needs of an oil shale industry to what may be physically and legally available from the White River in 2030. Table 3 shows scenarios depicting the amounts of water that would be needed to develop an oil shale industry of various sizes that relies on mining and surface retorting, based on the studies we examined. Table 4 shows similar scenarios for an oil shale industry that uses in-situ extraction, based on the studies that we examined. The sizes are based on industry and expert opinion and are not meant to be predictions. Both tables assume water demands for peak oil shale production rates, but water use may not follow such a pattern. For example, water use for reclamation activities may not fully overlap with water use for extraction. Also, an industry composed of multiple operations is likely to have some operations at different stages of development. Furthermore, because of the natural variability of stream flows, both within a year and from year to year, reservoirs would need to be built to provide storage, which could be used to release a consistent amount of water on a daily basis. Data maintained by the state of Colorado indicate that the amount of water physically available in the White River at Meeker, Colorado, averages about 472,000 acre-feet per year. Table 3 suggests that this is much more water than would be needed for any of the sizes of an industry relying on mining and surface retorting that we considered. Table 4, however, shows that an industry that uses in-situ extraction could be limited even by the total amount of water physically available in the White River at Meeker, Colorado. For example, an industry that uses about 12 barrels of water for each barrel of shale oil it produces could not reach 1 million barrels per day if it relied solely on physically available water from the White River. Comparing an oil shale industry's needs to what is physically and legally available considers the needs of current users and the anticipated needs of future users, rather than assuming all water in the river is available to an oil shale industry. The amount of water that is physically and legally available in the White River at Meeker is depicted in table 5. According to the state of Colorado's computer models, holders of water rights downstream use on average about 153,000 acre-feet per year, resulting in an average of about 319,000 acre-feet per year that is currently physically and legally available for development near Meeker. By 2030, however, the amount of water that is physically and legally available is expected to change because of increased demand and decreased supply. After taking into account an anticipated future decrease of 22,000 acre-feet per year of water due to a growing population, about 297,000 acre-feet per year may be available for future development if current water rights and uses do not change by 2030. However, there may be additional decreases in the amount of physically and legally available water in the White River due to climate change, demands under interstate agreements, and water requirements for threatened or endangered fishes, but we did not include these changes in table 5 because of the large uncertainty associated with these estimates. Comparing the scenarios in table 4 to the amount of water that is physically and legally available in table 5 shows the sizes that an in-situ oil shale industry may reach relying solely on obtaining new rights on the White River.
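The comparisons drawn from tables 3, 4, and 5 rest on converting barrels of water per barrel of oil into acre-feet per year at a given production rate. The sketch below, written in Python, is illustrative only; the function name is ours, it uses the standard factors of 42 gallons per barrel and 325,851 gallons per acre-foot, and it assumes constant year-round production at the stated rate, which actual operations would not necessarily maintain.

    # Illustrative conversion of an oil production rate and a water intensity
    # (barrels of water per barrel of oil) into acre-feet of water per year.
    GALLONS_PER_BARREL = 42
    GALLONS_PER_ACRE_FOOT = 325_851

    def water_demand_acre_feet_per_year(oil_bbl_per_day, water_bbl_per_oil_bbl):
        gallons_per_year = oil_bbl_per_day * water_bbl_per_oil_bbl * GALLONS_PER_BARREL * 365
        return gallons_per_year / GALLONS_PER_ACRE_FOOT

    # 500,000 barrels of oil per day at the maximum estimate of about 12 barrels of water
    print(round(water_demand_acre_feet_per_year(500_000, 12)))   # about 282,000 acre-feet per year
    # 1 million barrels of oil per day at the average estimate of about 5 barrels of water
    print(round(water_demand_acre_feet_per_year(1_000_000, 5)))  # about 235,000 acre-feet per year

Volumes of this order can then be set against the roughly 472,000 acre-feet per year physically available at Meeker and the roughly 297,000 acre-feet per year expected to be physically and legally available in 2030, which is the comparison described in the next paragraph.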
The scenarios in table 4 suggest that an in-situ oil shale industry producing 500,000 barrels of oil per day—an amount that some experts believe is reasonable—could possibly develop in Colorado even if it uses about 12 barrels of water per barrel of shale oil produced. Similarly, the scenarios suggest that an in-situ industry that uses about 5 barrels of water per barrel of oil produced—almost the average from the studies in which power comes from combined cycle natural gas plants—could grow to 1 million barrels of oil per day using only the water that appears to be physically and legally available in 2030 in the White River. Table 4 also shows that an industry that uses just 1 barrel of water per barrel of shale oil produced could grow to over 2.5 million barrels of oil per day. Regardless of these comparisons, more water or less water could be available in the future because it is unlikely that water rights will remain unchanged until 2030. For example, officials with the state of Colorado reported that conditional water rights—those rights held but not used—are not accounted for in the 297,000 acre-feet per year of water that is physically and legally available because holders of these rights are not currently withdrawing water. These officials also said that the amount of conditional water rights greatly exceeds the flow in the White River near Meeker, and if any of these conditional rights are converted to absolute rights and additional water is then withdrawn downstream, even less water will be available for future development. However, officials with the state of Colorado said that some of these conditional water rights are already owned by oil shale companies, making it unnecessary for some companies to apply for new water rights. In addition, they said, some of the absolute water rights that are accounted for in the estimated 153,000 acre-feet per year of water currently being withdrawn are already owned by oil shale companies. These are agricultural rights that were purchased by oil shale interests who leased them back to the original owners to continue using them for agricultural purposes. Should water not be available from the White River, companies would need to look to groundwater or surface water outside of the immediate area. There are fewer data available to predict future water supplies in Utah's oil shale resource area. The state of Utah did not provide us with summary information on existing water rights held by oil shale companies. According to the state of Colorado, the average annual physical flow of the White River near the Colorado-Utah border is about 510,000 acre-feet per year. Any amount withdrawn from the White River in Colorado would mean that much less water available for development downstream in Utah. The state of Utah estimates that the total water supply of the Uintah Basin, less downstream obligations under interstate compacts, is 688,000 acre-feet per year. Much of the surface water contained in this amount is currently being withdrawn, and water rights have already been filed for much of the remaining available surface water. Although the federal government sponsors research on the nexus between oil shale development and water, a lack of comprehensive data on the condition of surface water and groundwater and their interaction limits efforts to monitor the future impacts of oil shale development.
Currently, DOE funds some research related to oil shale and water resources, including research on water rights, water needs, and the impacts of oil shale development on water quality. Interior also performs limited research on characterizing surface and groundwater resources in oil shale areas and is planning some limited monitoring of water resources. However, there is general agreement among those we contacted—including state personnel who regulate water resources, federal agency officials responsible for studying water, water researchers, and water experts—that this ongoing research is insufficient to monitor and subsequently mitigate the potential impacts of oil shale development on water resources. In addition, DOE and Interior officials noted that they seldom formally share the information on their water-related research with each other. DOE has sponsored most of the oil shale research that involves water-related issues. This research consists of projects managed by the National Energy Technology Laboratory (NETL), the Office of Naval Petroleum and Oil Shale Reserves, and the Idaho National Laboratory. As shown in table 6, DOE has sponsored 13 of 15 projects initiated by the federal government since June 2006. DOE's projects account for almost 90 percent of the estimated $5 million that is to be spent by the federal government on water-related oil shale research through 2013. Appendix II contains a list and description of these projects. NETL sponsors the majority of the water-related oil shale research currently funded by DOE. Through workshops, NETL gathers information to prioritize research. For example, in October 2007, NETL sponsored the Oil Shale Environmental Issues and Needs Workshop that was attended by a cross-section of stakeholders, including officials from BLM and state water regulatory agencies, as well as representatives from the oil shale industry. One of the top priorities that emerged from the workshop was to develop an integrated regional baseline for surface water and groundwater quality and quantity. As we have previously reported, after the identification of research priorities, NETL solicits proposals and engages in a project selection process. We identified seven projects involving oil shale and water that NETL has awarded since June 2006. The University of Utah, Colorado School of Mines, the Utah Geological Survey, and the Idaho National Laboratory (INL) are performing the work on these projects. These projects cover topics such as water rights, water needs for oil shale development, impacts of retorting on water quality, and some limited groundwater modeling. One project conducted by the Colorado School of Mines involves developing a geographic information system for storing, managing, analyzing, visualizing, and disseminating oil shale data from the Piceance Basin. Although this project will provide some baseline data on surface water and groundwater and involves some theoretical groundwater modeling, the project's researchers told us that these data will be neither comprehensive nor complete. In addition, NETL-sponsored research conducted at the University of Utah involves examining the effects of oil shale processing on water quality, new approaches to treat water produced from oil shale operations, and water that can be recycled and reused in operations. INL is sponsoring and performing research on four water-related oil shale projects while conducting research for NETL and the Office of Naval Petroleum and Oil Shale Reserves.
The four projects that INL is sponsoring were self-initiated and funded internally through DOE’s Laboratory Directed Research and Development program. Under this program, the national laboratories have the discretion to self-initiate independent research and development, but it must focus on the advanced study of scientific or technical problems, experiments directed toward proving a scientific principle, or the early analysis of experimental facilities or devices. Generally, the researchers propose projects that are judged by peer panels and managers for their scientific merits. An INL official told us they selected oil shale and water projects because unconventional fossil fuels, which include oil shale, are a priority in which they have significant expertise. According to DOE officials, one of the projects managed by the Office of Naval Petroleum and Oil Shale Reserves is directed at research on the environmental impacts of unconventional fuels. The Los Alamos National Laboratory is conducting the work for DOE, which involves examining water and carbon-related issues arising from the development of oil shale and other unconventional fossil fuels in the western United States. Key water aspects of the study include the use of an integrated modeling process on a regional basis to assess the amounts and availability of water needed to produce unconventional fuels, water storage and withdrawal requirements, possible impacts of climate change on water availability, and water treatment and recycling options. Although a key aspect of the study is to assess water availability, researchers on the project told us that little effort will be directed at assessing groundwater, and the information developed will not result in a comprehensive understanding of the baseline conditions for water quality and quantity. Within Interior, BLM is sponsoring two oil shale projects related to water resources with federal funding totaling about $500,000. The USGS is conducting the research for both projects. For one of the projects, which is funded jointly by BLM and a number of Colorado cities and counties plus various oil shale companies, the research involves the development of a common repository for water data collected from the Piceance Basin. More specifically, the USGS has developed a Web-based repository of water quality and quantity data obtained by identifying 80 public and private databases and by analyzing and standardizing data from about half of them. According to USGS officials, many data elements are missing, and the current repository is not comprehensive. The second project, which is entirely funded by BLM, will monitor groundwater quality and quantity within the Piceance Basin in 5 existing wells and 10 more to be determined at a future date. Although USGS scientists said that this is a good start to understanding groundwater resources, it will not be enough to provide a regional understanding of groundwater resources. Federal law and regulations require the monitoring of major federal actions, such as oil shale development. Regulations developed under the National Environmental Policy Act (NEPA) for preparing an environmental impact statement (EIS), such as the EIS that will be needed to determine the impacts of future oil shale development, require the preparing agency to adopt a monitoring and enforcement program if measures are necessary to mitigate anticipated environmental impacts. 
Furthermore, the NEPA Task Force Report to the Council on Environmental Quality noted that monitoring must occur for long enough to determine if the predicted mitigation effects are achieved. The council noted that monitoring and consideration of potential adaptive measures to allow for midcourse corrections, without requiring new or supplemental NEPA review, will assist in accounting for unanticipated changes in environmental conditions, inaccurate predictions, or subsequent information that might affect the original environmental conditions. In September 2007, the Task Force on Strategic Unconventional Fuels—an 11-member group that included the Secretaries of DOE and Interior and the Governors of Colorado and Utah—issued a report with recommendations on promoting the development of fuels from domestic unconventional fuel resources as mandated by the Energy Policy Act of 2005. This report included recommendations and strategies for developing baseline conditions for water resources and monitoring the impacts from oil shale development. It recommended that a monitoring plan be developed and implemented to fill data gaps at large scales and over long periods of time and to also develop, model, test, and evaluate short- and long-term monitoring strategies. The report noted that systems to monitor water quality would be evaluated; additional needs would be identified; and relevant research, development, and demonstration needs would be recommended. Also in September 2007, the USGS prepared for BLM a report to improve the efficiency and effectiveness of BLM’s monitoring efforts. The report noted that regional water-resources monitoring should identify gaps in data, define baseline conditions, develop regional conceptual models, identify impacts, assess the linkage of impacts to energy development, and understand how impacts propagate. The report also noted that in the Piceance Basin, there is no local, state-level, or national comprehensive database for surface water and groundwater data. Furthermore, for purposes of developing a robust and cost-effective monitoring plan, the report stated that a compilation and analysis of available data are necessary. One of the report’s authors told us that the two BLM oil shale projects that the USGS is performing are the initial steps in implementing such a regional framework for water resource monitoring. However, the author said that much more work is needed because so much water data are missing. He noted the current data repository is not comprehensive and much more data would be needed to determine whether oil shale development will create adverse effects on water resources. Nearly all the federal agency officials, state water regulators, oil shale researchers, and water experts with whom we spoke said that more data are needed to understand the baseline condition of groundwater and surface water, so that the potential impacts of oil shale development can be monitored (see appendix I for a list of the agencies we contacted). Several officials and experts to whom we spoke stressed the need to model the movement of groundwater and its interaction with surface water to understand the possible transport of contaminants from oil shale development. They suggested that additional research would help to overcome these shortcomings. Specifically, they identified the following issues: Insufficient data for establishing comprehensive baseline conditions for surface water and groundwater quality and quantity. 
Of the 18 officials and experts we contacted, 17 noted that there are insufficient data to understand the current baseline conditions of water resources in the Piceance and Uintah Basins. Such baseline conditions include the existing quantity and quality of both groundwater and surface water. Hydrologists among those we interviewed explained that more data are needed on the chemistry of surface water and groundwater, properties of aquifers, age of groundwater, flow rates and patterns of groundwater, and groundwater levels in wells. Although some current research projects have collected and are collecting water data, officials from the USGS, Los Alamos National Laboratory, and the universities doing this research agreed that their data are not comprehensive enough to support future monitoring efforts. Furthermore, Colorado state officials told us that even though a substantial amount of water data was generated over time, including during the last oil shale boom, little of it has been assimilated, gaps exist, and the data need to be updated in order to support future monitoring. Insufficient research on groundwater movement and its interaction with surface water for modeling possible transport of contaminants. Sixteen of 18 officials and experts to whom we spoke noted that additional research is needed to develop a better understanding of the interactions between groundwater and surface water and of groundwater movement. Officials from NETL explained that this is necessary in order to monitor the rate and pattern of flow of possible contaminants resulting from the in-situ retorting of oil shale. They noted that none of the groundwater research currently under way is comprehensive enough to build the necessary models to understand the interaction and movement. NETL officials noted that more subsurface imaging and visualization are needed to build geologic and hydrologic models and to study how quickly groundwater migrates. These tools will aid in monitoring and provide data that do not currently exist. Interior and DOE officials generally have not shared current research on water and oil shale issues. USGS officials who conduct water-related research at Interior and DOE officials at NETL, which sponsors the majority of the water and oil shale research at DOE, stated they have not talked with each other about such research in almost 3 years. USGS staff noted that although DOE is currently sponsoring most of the water-related research, USGS researchers were unaware of most of these projects. In addition, staff at Los Alamos National Laboratory who are conducting some water-related research for DOE noted that various researchers are not always aware of studies conducted by others and stated that there needs to be a better mechanism for sharing this research. Based on our review, we found that there does not appear to be any formal mechanism for sharing water-related research activities and results among Interior, DOE, and state regulatory agencies in Colorado and Utah. The last general meeting to discuss oil shale research among these agencies was in October 2007, although there have been opportunities to informally share research at the annual Oil Shale Symposium, the last one of which was conducted at the Colorado School of Mines in October 2010.
Of the various officials with the federal and state agencies, representatives from research organizations, and water experts we contacted, 15 of 18 noted that federal and state agencies could benefit from collaboration with each other on water-related research involving oil shale. Representatives from NETL, who are sponsoring much of the current research, stated that collaboration should occur at least every 6 months. We and others have reported that collaboration among government agencies can produce more public value than one agency acting alone. Specifically concerning water resources, we previously reported that coordination is needed to enable monitoring programs to make better use of available resources in light of organizations often being unaware of data collected by other groups. Similarly, in 2004, the National Research Council concluded that coordination of water research is needed to make deliberative judgments about the allocation of funds, to minimize duplication, to present to Congress and the public a coherent strategy for federal investment, and to facilitate large-scale multiagency research efforts. In 2007, the Subcommittee on Water Availability and Quality within the Office of Science and Technology Policy, an office that advises the President and leads interagency efforts related to science and technology, stated, "Given the importance of sound water management to the Nation's well-being it is appropriate for the Federal government to play a significant role in providing information to all on the status of water resources and to provide the needed research and technology that can be used by all to make informed water management decisions." In addition, H.R. 1145—the National Water Research and Development Initiative Act of 2009—which has passed the House of Representatives and is currently in a Senate committee, would establish a federal interagency committee to coordinate all federal water research, which totals about $700 million annually. This bill focuses on improving coordination among agency research agendas, increasing the transparency of water research budgeting, and reporting on progress toward research outcomes. The unproven nature of oil shale technologies and choices in how to generate the power necessary to develop this resource cast a shadow of uncertainty over how much water is needed to sustain a commercially viable oil shale industry. Additional uncertainty about the size of such an industry clouds the degree to which surface and groundwater resources could be impacted in the future. Furthermore, these uncertainties are compounded by a lack of knowledge of the current baseline conditions of groundwater and surface water, including their chemistry and interaction, properties of aquifers, and the age and rate of movement of groundwater, in the arid Piceance and Uintah Basins of Colorado and Utah, where water is considered one of the most precious resources. All of these uncertainties pose difficulties for oil shale developers, federal land managers, state water regulators, and current water users in their efforts to protect water resources. Attempts to commercially develop oil shale in the United States have spanned nearly a century. During this time, the industry has focused primarily on overcoming technological challenges and trying to develop a commercially viable operation. More recently, the federal government has begun to focus on studying the potential impacts of oil shale development on surface water and groundwater resources.
However, these efforts are in their infancy when compared to the length of time that the industry has spent on attempting to overcome technological challenges. These nascent efforts do not adequately define current baseline conditions for water resources in the Piceance and Uintah Basins, nor have they begun to model the important interaction of groundwater and surface water in the region. Thus they currently fall short of preparing federal and state governments for monitoring the impacts of any future oil shale development. In addition, there is a lack of coordination among federal agencies on water-related research and a lack of communicating results among themselves and to the state regulatory agencies. Without such coordination and communication, federal and state agencies cannot begin to develop an understanding of the potential impacts of oil shale development on water resources and monitor progress toward shared water goals. By taking steps now, the federal government, working in concert with the states of Colorado and Utah, can position itself to help monitor western water resources should a viable oil shale industry develop in the future. To prepare for possible impacts from the future development of oil shale, we are making three recommendations to the Secretary of the Interior. Specifically, the Secretary should direct the appropriate managers in the Bureau of Land Management and the U.S. Geological Survey to 1. establish comprehensive baseline conditions for groundwater and surface water quality, including their chemistry, and quantity in the Piceance and Uintah Basins to aid in the future monitoring of impacts from oil shale development in the Green River Formation; 2. model regional groundwater movement and the interaction between groundwater and surface water, in light of aquifer properties and the age of groundwater, so as to help in understanding the transport of possible contaminants derived from the development of oil shale; and 3. coordinate with the Department of Energy and state agencies with regulatory authority over water resources in implementing these recommendations, and to provide a mechanism for water-related research collaboration and sharing of results. We provided a copy of our draft report to Interior and DOE for their review and comment. Interior provided written comments and generally concurred with our findings and recommendations. Interior highlighted several actions it has under way to begin to implement our recommendations. Specifically, Interior stated that with regard to our first recommendation to establish comprehensive baseline conditions for surface water and groundwater in the Piceance and Uintah Basins, implementation of this recommendation includes ongoing USGS efforts to analyze existing water quality data in the Piceance Basin and ongoing USGS efforts to monitor surface water quality and quantity in both basins. Interior stated that it plans to conduct more comprehensive assessments in the future. With regard to our second recommendation to model regional groundwater movement and the interaction between groundwater and surface water, Interior said BLM and USGS are working on identifying shared needs for modeling. 
Interior underscored the importance of modeling prior to the approval of large-scale oil shale development and cited the importance of the industry's testing of various technologies on federal RD&D leases to determine whether production can occur in commercial quantities and to develop an accurate determination of potential water uses for each technology. In support of our third recommendation to coordinate with DOE and state agencies with regulatory authority over water resources, Interior stated that BLM and USGS are working to improve such coordination and noted current efforts with state and local authorities. Interior's comments are reproduced in appendix III. DOE also provided written comments, but did not specifically address our recommendations. Nonetheless, DOE indicated that it recognizes the need for a more comprehensive and integrated cross-industry/government approach for addressing impacts from oil shale development. However, DOE raised four areas where it suggested additional information be added to the report or took issue with our findings. First, DOE suggested that we include in our report appropriate aspects of a strategic plan drafted by an ad hoc group of industry, national laboratory, university, and government representatives organized by the DOE Office of Naval Petroleum and Oil Shale Reserves. We believe aspects of this strategic plan are already incorporated into our report. For example, the strategic plan of this ad hoc group calls for implementing recommendations of the Task Force on Strategic Unconventional Fuels, which was convened by the Secretary of Energy in response to a directive within the Energy Policy Act of 2005. The Task Force on Strategic Unconventional Fuels recommended developing baseline conditions for water resources and monitoring the impacts from oil shale development, which is consistent with our first recommendation. The ad hoc group's report recognized the need to share information and collaborate with state and other federal agencies, which is consistent with our third recommendation. As such, we made no changes to this report in response to this comment. Second, DOE stated that we overestimated the amount of water needed for in-situ oil shale development and production. We disagree with DOE's statement because the estimates presented in our report respond to our objective, which was to describe what is known about the amount of water that may be needed for commercial oil shale development, and they are based on existing publicly available data. We reported the entire range of reputable studies without bias to illustrate the wide range of uncertainty in water needed to commercially develop oil shale, given the current experimental nature of the process. We reported only publicly available estimates based on original research that were substantiated with a reasonable degree of documentation so that we could verify that the estimates covered the entire life cycle of oil shale development and that these estimates did not pertain solely to field demonstration projects, but were instead scalable to commercial operations. We reviewed and considered estimates from all of the companies that DOE identified in its letter. The range of water needed for commercial in-situ development of oil shale that we report is from 1 to 12 barrels of water per barrel of oil.
These lower and upper bounds represent the sum of the most optimistic and most pessimistic estimates of water needed for all five groups of activities that we identified as comprising the life cycle of in-situ oil shale development. However, the lower estimate is based largely on estimates by ExxonMobil and incorporates the use of produced water, water treatment, and recycling, contrary to DOE’s statement that we dismissed the significance of these activities. The upper range is influenced heavily by the assumption that electricity used in retorting will come from coal-fired plants and that a maximum amount of water will be used for rinsing the retorted zones, based on modeling done at the Center for Oil Shale Technology and Research. The studies supporting these estimates were presented at the 29th Annual Oil Shale Symposium at the Colorado School of Mines. Such a range overcomes the illusion of precision that is conveyed by a single point estimate, such as the manner in which DOE cites the 1.59 barrels of water from the AMEC study, or the bias associated with reporting a narrow range based on the assumption that certain technologies will prevail before they are proven to be commercially viable for oil shale development. Consequently, we made no changes to the report in response to this comment. Third, DOE stated that using the amount of water in the White River at Meeker, Colorado, to illustrate the availability of water for commercial oil shale development understates water availability. We disagree with DOE’s characterization of our illustration. The illustration we use in the report is not meant to imply that an entire three-state industry would be limited by water availability at Meeker. Rather, the illustration explores the limitations of an in-situ oil shale industry only in the Piceance Basin. More than enough water appears available for a reasonably sized industry that depends on mining and surface retorting in the Piceance basin. Our illustration also suggests that there may be more than enough water to supply a 2.5 million barrel-per-day in-situ industry at minimum water needs, even considering the needs of current water users and the anticipated needs of future water users. In addition, the illustration suggests that there may be enough water to supply an in-situ industry in the Piceance Basin of between 1 and 2 million barrels per day at average water needs, depending upon whether all the water in the White River at Meeker is used or only water that is expected to be physically and legally available in the future. However, the illustration does point out limitations. It suggests that at maximum water needs, an in-situ industry in the Piceance Basin may not reach 1 million barrels per day if it relied solely on water in the White River at Meeker. Other sources of water may be needed, and our report notes that these other sources could include water in the Yampa or Colorado Rivers, as well as groundwater. Use of produced water and recycling could also reduce water needs as noted in the draft report. Consequently, we made no changes to the report in response to this comment. Fourth, DOE stated that the report gives the impression that all oil shale technologies are speculative and proving them to be commercially viable will be difficult, requiring a long period of time with uncertain outcomes. We disagree with this characterization of our report. Our report clearly states that there is uncertainty regarding the commercial viability of in-situ technologies. 
Based on our discussions with companies and review of available studies, Shell is the only active oil shale company to have successfully produced shale oil from a true in-situ process. Considering the uncertainty associated with impacts on groundwater resources and reclamation of the retorted zone, commercialization of an in-situ process is likely to be a number of years away. To this end, Shell has leased federal lands from BLM to test its technologies, and more will be known once this testing is completed. With regard to mining oil shale and retorting it at the surface, we agree that it is a relatively mature process. Nonetheless, competition from conventional crude oil has inhibited commercial oil shale development in the United States for almost 100 years. Should some of the companies that DOE mentions in its letter prove to be able to produce oil shale profitably and in an environmentally sensitive manner, they will be among the first to overcome such long-standing challenges. We are neither dismissing these companies, as DOE suggests, nor touting their progress. In addition, it was beyond the scope of our report to portray the timing of commercial oil shale production or describe a more exhaustive history of oil shale research, as DOE had recommended, because much research currently is privately funded and proprietary. Therefore, we made no changes to the report in response to this comment. DOE’s comments are reproduced in appendix IV. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, Secretaries of the Interior and Energy, Directors of the Bureau of Land Management and U.S. Geological Survey, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact one of us at (202) 512-3841 or gaffiganm@gao.gov or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine what is known about the potential impacts to groundwater and surface water from commercial oil shale development, we reviewed the Proposed Oil Shale and Tar Sands Resource Management Plan Amendments to Address Land Use Allocations in Colorado, Utah, and Wyoming and Final Programmatic Environmental Impact Statement (PEIS) prepared by the Bureau of Land Management in September 2008. We also reviewed environmental assessments prepared on Shell Oil’s plans for in-situ development of its research, demonstration, and development (RD&D) tracts in Colorado and on the Oil Shale Exploration Company’s (OSEC) plan to mine oil shale on its RD&D tract in Utah because these two companies have made the most progress toward developing in-situ and mining technologies, respectively. In addition, we reviewed the Office of Technology Assessment’s (OTA) 1980 report, An Assessment of Oil Shale Technologies; the Rand Corporation’s 2005 report, Oil Shale Development in the United States; and the Argonne National Laboratory’s 2005 report, Potential Ground Water and Surface Water Impacts from Oil Shale and Tar Sands Energy-Production Operations. 
Because the PEIS was the most comprehensive of these documents, we summarized impacts to groundwater and surface water quantity and quality described within this document and noted that these impacts were entirely qualitative in nature and that the magnitude of impacts was indeterminate because the in-situ technologies have yet to be developed. To confirm these observations and the completeness of impacts within the PEIS, we contacted the Environmental Protection Agency, the Colorado Division of Water Resources, the Colorado Water Conservation Board, the Division of Water Quality within the Colorado Department of Public Health and Environment, the Utah Division of Water Resources, the Utah Division of Water Quality, and the Utah Division of Water Rights—all of which have regulatory authority over some aspect of water resources. To ensure that we identified the range of views on the potential impacts of oil shale development on groundwater and surface water, we also contacted the U.S. Geological Survey (USGS), the Colorado Geological Survey, the Utah Geological Survey, industry representatives, water experts, and numerous environmental groups for their views on the impacts of oil shale on water resources. To assess the impacts of oil shale development on aquatic resources, we reviewed the PEIS and contacted the Colorado Division of Wildlife and the Utah Division of Wildlife Resources. To determine what is known about the amount of water that may be needed for commercial oil shale development, we searched the Internet and relevant databases of periodicals using the words “oil shale” together with “water use.” We also searched Web sites maintained by the Bureau of Land Management (BLM), USGS, and the Department of Energy (DOE) for information on oil shale and water use and interviewed officials at these agencies to determine if there were additional studies that we had not identified. We also checked references cited within the studies for other studies. We limited the studies to those published in 1980 or after because experts with whom we consulted either considered the studies published before then to be adequately summarized in OTA’s 1980 report or to be too old to be relevant. We included certain data within the OTA report because some of the surface retort technologies are similar to technologies being tested today. We did not consider verbal estimates of water needs unless companies could provide more detailed information. The 17 studies that we identified appear in table 7. For further analysis, we divided the studies into two major groups—in-situ extraction and mining with a surface retort. We dismissed a combination of mining and in-situ extraction because most of these technologies are more than 30 years old and generally considered to be infeasible today. The single company that is pursuing such a combination of technologies today—Red Leaf Resources— has not published detailed data on water needs. After reviewing these studies, we found that most of the studies did not examine water needs for the entire life cycle of oil shale development. As such, we identified logical groups of activities based on descriptions within the studies. We identified the following five groups of activities: (1) extraction and retorting, (2) generating power, (3) upgrading shale oil, (4) reclamation, and (5) population growth associated with oil shale development. 
We did not include refining because we believe it is unlikely that oil shale production will reach levels in the near or midterm that would justify building a new refinery. To characterize the water needs for the entire life cycle of oil shale development, we identified within each study the water needs for each of the five groups of activities. Except for OTA's 1980 report, which is now 30 years old, we contacted the authors of each study and discussed the estimates with them. If estimates within these studies were given for more than one group of activities, we asked the authors to break down the estimate into the individual groups when possible. We considered for further analysis only those estimates of water needs for groups of activities that were based on original research, so as not to count these estimates multiple times. For example, original research on water needs for extraction and retorting may have analyzed mine plans, estimated water needs for drilling wells, estimated water needs for dust control, and discussed recycling of produced water. Original research on water needs for population growth may have discussed the number of workers immigrating to a region, their family size, per capita water consumption, and the nature of housing required by workers. On the other hand, estimates of water needs that were not based on original research generally reported water needs for multiple groups of activities in barrels of water per barrel of oil produced and cited someone else's work as the source for this number. We excluded several estimates that seemed unlikely. For example, we eliminated a water estimate for power generation that included building a nuclear power plant and water estimates for population growth where it was assumed that people would decrease their water consumption by over 50 percent. We also excluded technologies developed prior to 1980 that are dissimilar to technologies being considered by oil shale companies today. We checked mathematical calculations and reviewed power requirements and the reasonableness of associated water needs. For power estimates that did not include associated water needs, we converted power needs into water needs using 480 gallons per megawatt hour of electricity produced by coal-fired, wet recirculating thermoelectric plants and 180 gallons per megawatt hour of electricity produced by gas-powered, combined cycle, wet recirculating thermoelectric plants. Air-cooled systems consume almost no water for cooling. Where appropriate, we also estimated shale oil recoveries based on the company's estimated oil shale resources and estimated water needs for rinsing retorted zones based on anticipated changes to the reservoir. We converted water requirements to barrels of water needed per barrel of oil produced. For those studies with water needs that met our criteria, we tabulated water needs for each group of activities for both in-situ production and mining with a surface retort. The results appear in tables 8 and 9. We estimated the total range of water needs for in-situ development by summing the minimum estimates for each group of activities and by summing the maximum estimates for the various groups of activities. We did the same for mining with a surface retort. We also calculated the average water needs for each group of activities.
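To make these two calculations concrete, the following is a minimal Python sketch of the conversion from power requirements to cooling water and of the life-cycle summation. The per-activity figures and the 0.2 megawatt hour power requirement are hypothetical placeholders, not the estimates tabulated in tables 8 and 9; only the gallons-per-megawatt-hour factors come from the text above.

```python
# Minimal sketch of the two calculations described above, using hypothetical
# per-activity estimates (the actual figures appear in tables 8 and 9).

GALLONS_PER_BARREL = 42.0

# Water consumed for cooling, per megawatt hour of electricity generated (from the text).
WATER_PER_MWH = {
    "coal_wet_recirculating": 480.0,   # gallons per MWh
    "gas_combined_cycle_wet": 180.0,   # gallons per MWh
    "air_cooled": 0.0,                 # air-cooled systems consume almost no cooling water
}

def power_water_bbl_per_bbl_oil(mwh_per_bbl_oil: float, plant_type: str) -> float:
    """Convert a power requirement (MWh per barrel of oil) into barrels of
    cooling water per barrel of oil produced."""
    gallons = mwh_per_bbl_oil * WATER_PER_MWH[plant_type]
    return gallons / GALLONS_PER_BARREL

# Hypothetical: an in-situ process assumed to need 0.2 MWh of coal-fired
# electricity per barrel of oil produced.
power_water = power_water_bbl_per_bbl_oil(0.2, "coal_wet_recirculating")
print(f"Cooling water for power generation: {power_water:.1f} barrels per barrel of oil")

# Hypothetical minimum and maximum estimates (barrels of water per barrel of oil)
# for the five groups of activities identified in the studies.
activity_estimates = {
    "extraction_and_retorting": (0.5, 4.0),
    "generating_power":         (0.0, 3.0),
    "upgrading_shale_oil":      (0.3, 2.0),
    "reclamation":              (0.1, 2.0),
    "population_growth":        (0.1, 1.0),
}

life_cycle_min = sum(lo for lo, hi in activity_estimates.values())
life_cycle_max = sum(hi for lo, hi in activity_estimates.values())
print(f"Life-cycle range: {life_cycle_min:.1f} to {life_cycle_max:.1f} "
      "barrels of water per barrel of oil")
```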
To determine the extent to which water is likely to be available for commercial oil shale development and its source, we compared the total needs of an oil shale industry of various sizes to the amount of surface water and groundwater that the states of Colorado and Utah estimate to be physically and legally available, in light of future municipal and industrial demand. We selected the sizes of an oil shale industry based on input from industry and DOE. These are hypothetical sizes, and we do not imply that an oil shale industry will grow to these sizes. The smallest size we selected for an in-situ industry, 500,000 barrels of oil per day, is a likely size identified by an oil shale company based on experience with the development of the Canadian tar sands. The largest size of 2,500,000 barrels of oil per day is based on DOE projections. We based our smallest size of a mining industry, 25,000 barrels of oil per day, on one-half of the smallest scenario identified by URS in their work on water needs contracted by the state of Colorado. We based our largest size of a mining industry, 150,000 barrels of oil per day, on three projects each of 50,000 barrels of oil per day, which is a commonly cited size for a commercial oil shale mining operation. We reviewed and analyzed two detailed water studies commissioned by the state of Colorado to determine how much water is available in Colorado, where it was available, and to what extent demands will be placed on this water in the future. We also reviewed a report prepared for the Colorado Water Conservation Board on future water availability in the Colorado River. These studies were identified by water experts at various Colorado state water agencies as the most updated information on Colorado’s water supply and demand. To determine the available water supply and the potential future demand in the Uintah Basin, we reviewed and analyzed data in documents prepared by the Utah Division of Water Resources. We also examined data on water rights provided by the Utah Division of Water Rights and examined data collected by Western Resource Advocates on oil shale water rights in Colorado. In addition to reviewing these documents, we interviewed water experts at the Bureau of Reclamation, USGS, Utah Division of Water Rights, Utah Division of Water Resources, Utah Division of Water Quality, Colorado Division of Natural Resources, Colorado Division of Water Resources, Colorado River Water Conservation District, the Utah and Colorado State Demographers, and municipal officials in the oil shale resource area. To identify federally funded research efforts to address the impacts of commercial oil shale development on water resources, we interviewed officials and reviewed information from offices or agencies within DOE and the Department of the Interior (Interior). Within DOE, these offices were the Office of Naval Petroleum and Oil Shale Reserves, the National Energy Technology Laboratory, and other DOE offices with jurisdiction over various national laboratories. Officials at these offices identified the Idaho National Laboratory and the Los Alamos National Laboratory as sponsoring or performing water-related oil shale research. In addition, they identified experts at Argonne National Laboratory who worked on the PEIS for BLM or who wrote reports on water and oil shale issues. Within Interior, we contacted officials with BLM and the USGS. 
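The water-availability comparison described above reduces to a simple water-budget check: convert an assumed industry size and water intensity into an annual demand and compare it with an estimate of available supply. The sketch below illustrates the arithmetic only; the supply figure is a placeholder, not the White River data or the state estimates we analyzed.

```python
# Minimal sketch of the comparison described above: annual water demand for a
# hypothetical in-situ industry versus an assumed available supply. The supply
# figure below is a placeholder, not an actual river or state estimate.

GALLONS_PER_BARREL = 42.0
GALLONS_PER_ACRE_FOOT = 325_851.0

def annual_demand_acre_feet(barrels_oil_per_day: float,
                            bbl_water_per_bbl_oil: float) -> float:
    """Annual water demand, in acre-feet, for a given industry size and water intensity."""
    gallons_per_year = barrels_oil_per_day * bbl_water_per_bbl_oil * GALLONS_PER_BARREL * 365
    return gallons_per_year / GALLONS_PER_ACRE_FOOT

assumed_available_supply_af = 300_000.0   # placeholder: acre-feet per year

for size in (500_000, 1_000_000, 2_500_000):          # barrels of oil per day
    for intensity in (1, 5, 12):                       # barrels of water per barrel of oil
        demand = annual_demand_acre_feet(size, intensity)
        feasible = "within" if demand <= assumed_available_supply_af else "exceeds"
        print(f"{size:>9,} bbl/day at {intensity:>2} bbl water/bbl oil: "
              f"{demand:>10,.0f} acre-feet/yr ({feasible} assumed supply)")
```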
We asked officials at all of the federal agencies and offices that were sponsoring federal research to provide details on research that was water-related and to provide costs for the water-related portions of these research projects. For some projects, based on the nature of the research, we counted the entire award as water-related. We identified 15 water-related oil shale research projects. A detailed description of these projects is in appendix II. To obtain additional details on the work performed under these research projects, we interviewed officials with all the sponsoring organizations and the performing organizations, including the Colorado School of Mines, University of Utah, Utah Geological Survey, Idaho National Laboratory, Los Alamos National Laboratory, Argonne National Laboratory, and the USGS. To assess additional needs for research and to evaluate any gaps between research needs and the current research projects, we interviewed officials with 14 organizations and four experts who are authors of studies or reports we used in our analyses and who are recognized as having extensive knowledge of oil shale and water issues. The names of the 14 organizations appear in table 10. These discussions involved officials with all the federal offices either sponsoring or performing water-related oil shale research and state agencies involved in regulating water resources.

Appendix II: Descriptions of Federally Funded Water-Related Oil Shale Research

The University of Utah received four separate awards, totaling $4,838,097, each covering a broad array of oil shale research over multiple years. The awards included some water-related work. Examples of projects include (1) Meeting Data Needs to Perform a Water Impact Assessment for Oil Shale Development in the Uintah and Piceance Basins, (2) Effect of Oil Shale Processing on Water Compositions, and (3) New Approaches to Treat Produced Water and Perform Water Availability Impact Assessments for Oil Shale Development.

In addition to the individuals named above, Dan Haas (Assistant Director), Ron Belak, Laura Hook, and Randy Jones made major contributions to this report. Other individuals who made significant contributions were Charles Bausell, Virginia Chanley, Alison O'Neill, Madhav Panwar, and Barbara Timmerman.
Oil shale deposits in Colorado, Utah, and Wyoming are estimated to contain up to 3 trillion barrels of oil--or an amount equal to the world's proven oil reserves. About 72 percent of this oil shale is located beneath federal lands, making the federal government a key player in its potential development. Extracting this oil is expected to require substantial amounts of water and could impact groundwater and surface water. GAO was asked to report on (1) what is known about the potential impacts of oil shale development on surface water and groundwater, (2) what is known about the amount of water that may be needed for commercial oil shale development, (3) the extent to which water will likely be available for commercial oil shale development and its source, and (4) federal research efforts to address impacts to water resources from commercial oil shale development. GAO examined environmental impacts and water needs studies and talked to Department of Energy (DOE), Department of the Interior (Interior), and industry officials. Oil shale development could have significant impacts on the quality and quantity of water resources, but the magnitude of these impacts is unknown because technologies are years from being commercially proven, the size of a future oil shale industry is uncertain, and knowledge of current water conditions and groundwater flow is limited. In the absence of effective mitigation measures, water resources could be impacted from ground disturbances caused by the construction of roads and production facilities; withdrawing water from streams and aquifers for oil shale operations, underground mining and extraction; and discharging waters produced from or used in operations. Estimates vary widely for the amount of water needed to commercially produce oil shale primarily because of the unproven nature of some technologies and because the various ways of generating power for operations use differing quantities of water. GAO's review of available studies indicated that the expected total water needs for the entire life cycle of oil shale production ranges from about 1 barrel (or 42 gallons) to 12 barrels of water per barrel of oil produced from in-situ (underground heating) operations, with an average of about 5 barrels, and from about 2 to 4 barrels of water per barrel of oil produced from mining operations with surface heating. Water is likely to be available for the initial development of an oil shale industry, but the size of an industry in Colorado or Utah may eventually be limited by water availability. Water limitations may arise from increases in water demand from municipal and industrial users, the potential of reduced water supplies from a warming climate, fulfilling obligations under interstate water compacts, and the need to provide additional water to protect threatened and endangered fishes. The federal government sponsors research on the impacts of oil shale on water resources through DOE and Interior. DOE manages 13 projects whose water-related costs total about $4.3 million, and Interior sponsored two water-related projects, totaling about $500,000. Despite this research, nearly all of the officials and experts that GAO contacted said that there are insufficient data to understand baseline conditions of water resources in the oil shale regions of Colorado and Utah and that additional research is needed to understand the movement of groundwater and its interaction with surface water. 
Federal agency officials also said they seldom coordinate water-related oil shale research among themselves or with state agencies that regulate water. Most officials noted that agencies could benefit from such coordination. GAO recommends that Interior establish comprehensive baseline conditions for water resources in oil shale regions of Colorado and Utah, model regional groundwater movement, and coordinate on water-related research with DOE and state agencies involved in water regulation. Interior generally concurred with GAO's recommendations.
The federal government began with a public debt of about $78 million in 1789. Since then, the Congress has attempted to control the size of the debt by imposing ceilings on the amount of Treasury securities that could be outstanding. In February 1941, the Congress set an overall ceiling of $65 billion on all types of Treasury securities that could be outstanding at any one time. This ceiling was raised several times between February 1941 and June 1946 when a ceiling of $275 billion was set and remained in effect until August 1954. At that time, the Congress imposed the first temporary debt ceiling which added $6 billion to the $275 billion permanent ceiling. Since that time, the Congress has enacted numerous temporary and permanent increases in the debt ceiling. Although most of this debt is held by the public, about one fourth of it or $1.325 trillion, as of October 31, 1995, is issued to federal trust funds, such as the Social Security funds, the Civil Service fund, and the G-Fund. The Secretary of the Treasury has several responsibilities relating to the federal government’s financial management operations. These include paying the government’s obligations and investing trust fund receipts not needed for current benefits and expenses. The Congress has generally provided the Secretary with the ability to issue the necessary securities to the trust funds for investment purposes and to borrow the necessary funds from the public to pay government obligations. Under normal circumstances, the debt ceiling is not an impediment in carrying out these responsibilities. Treasury is notified by the appropriate agency (such as the Office of Personnel Management for the Civil Service fund) of the amount that should be invested (or reinvested) and Treasury makes the investment. In some cases, the actual security that Treasury should purchase may also be specified. These securities count against the debt ceiling. Consequently, if trust fund receipts are not invested, an increase in the debt subject to the debt ceiling does not occur. When Treasury is unable to borrow as a result of reaching the debt ceiling, the Secretary is unable to fully discharge his financial management responsibilities using the normal methods. On various occasions over the years, normal government financing has been disrupted because Treasury had borrowed up to or near the debt ceiling and legislation to increase the debt ceiling had not yet been enacted. These situations are commonly referred to as debt ceiling crises. In 1985 the government experienced a debt ceiling crisis from September 3 through December 11. During that period, Treasury took several actions that were similar to those discussed in this report. For example, Treasury redeemed Treasury securities held by the Civil Service fund earlier than normal in order to borrow sufficient cash from the public to meet the fund’s benefit payments and did not invest some trust fund receipts. In 1986 and 1987, following Treasury’s experiences during prior debt ceiling crises, the Congress provided to the Secretary of the Treasury statutory authority to use the Civil Service fund and the G-Fund to assist Treasury in managing its financial operations during a debt ceiling crisis. The following are statutory authorities provided to the Secretary of Treasury that are pertinent to the 1995-1996 debt ceiling crisis and the actions discussed in this report. 1. Redemption of securities held by the Civil Service fund. In subsection (k) of 5 U.S.C. 
8348, the Congress authorizes the Secretary of the Treasury to redeem securities or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. 5 U.S.C. 8348(k) also provides that, before exercising the authority to redeem securities of the Civil Service fund, the Secretary must first determine that a "debt issuance suspension period" exists. 5 U.S.C. 8348(j) defines this term: "the term 'debt issuance suspension period' means any period for which the Secretary of the Treasury determines for purposes of this subsection that the issuance of obligations of the United States may not be made without exceeding the public debt limit." 2. Suspension of Civil Service fund investments. In subsection (j) of 5 U.S.C. 8348, the Congress authorizes the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if such investment cannot be made without causing the amount of public debt to exceed the debt ceiling. This subsection of the statute instructs the Secretary on how to make the Civil Service fund whole after the debt issuance suspension period has ended. 3. Suspension of G-Fund investments. In subsection (g) of 5 U.S.C. 8438, the Congress authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if such issuance cannot be made without causing the amount of public debt to exceed the debt ceiling. The subsection contains instructions on how the Secretary is to make the G-Fund whole after the debt ceiling crisis has ended. 4. Issuance of securities not counted toward the debt ceiling. On February 8, 1996, the Congress provided Treasury with the authority (Public Law 104-103) to issue securities in an amount equal to March 1996 Social Security payments. This statute provided that the securities issued under its provisions were not to be counted against the debt ceiling until March 15, 1996, which was later extended to March 30, 1996. On March 12, 1996, the Congress enacted Public Law 104-115, which exempted government trust fund investments and reinvestments from the debt ceiling until March 30, 1996. We have previously reported on aspects of Treasury's actions during the 1985 and other debt ceiling crises. Those reports are: 1. A New Approach to the Public Debt Legislation Should Be Considered (FGMSD-79-58, September 7, 1979). 2. Opinion on the legality of the plan of the Secretary of the Treasury to disinvest the Social Security and other trust funds on November 1, 1985, to permit payments to beneficiaries of these funds (B-221077.2, December 5, 1985). 3. Civil Service Fund: Improved Controls Needed Over Investments (GAO/AFMD-87-17, May 7, 1987). 4. Debt Ceiling Options (GAO/AIMD-96-20R, December 7, 1995). 5. Social Security Trust Funds (GAO/AIMD-96-30R, December 12, 1995). 6. Debt Ceiling Limitations and Treasury Actions (GAO/AIMD-96-38R, January 26, 1996). 7. Information on Debt Ceiling Limitations and Increases (GAO/AIMD-96-49R, February 23, 1996).
Our objectives were to • develop a chronology of significant events relating to the 1995-1996 debt ceiling crisis, • evaluate the actions taken during the 1995-1996 debt ceiling crisis in relation to the normal policies and procedures Treasury uses for federal trust fund investments and redemptions, and • analyze the financial aspects of the departures from the normal policies and procedures and assess their legal basis. To develop a chronology of the significant events involving the 1995-1996 debt ceiling crisis, we obtained and reviewed applicable documents. We also discussed Treasury's actions during the crisis with Treasury officials. To evaluate the actions taken during the 1995-1996 debt ceiling crisis in relation to the normal policies and procedures Treasury uses for federal trust fund investments, we obtained an overview of the procedures used. For the 15 selected trust funds, which are identified in chapter 3, we examined the significant transactions that affected the trust funds between November 1, 1995, and March 31, 1996. In cases where the procedures were not followed, we obtained documentation and other information to help understand the basis and impact of the alternative procedures that were used. Although Treasury maintains accounts for over 150 different trust funds, we selected for review those with investments in Treasury securities that exceeded $8 billion on November 1, 1995. In addition, we selected the Exchange Stabilization Fund because Treasury used this fund in previous debt ceiling crises to help raise cash and stay under the debt ceiling. The funds we examined accounted for over 93 percent of the total securities held by these 150 trust funds as of October 31, 1995, and March 31, 1996. To analyze the financial aspects of Treasury's departures from its normal policies and procedures, we (1) reviewed the methodologies Treasury developed to minimize the impact of such departures on the federal trust funds, (2) quantified the impact of the departures, and (3) assessed whether any interest losses were properly restored. To assess the legal basis of Treasury's departures from its normal policies and procedures, we identified the applicable legal authorities and determined how Treasury applied them during the debt ceiling crisis. Our evaluation included those authorities relating to (1) issuing and redeeming Treasury securities during a debt issuance suspension period and restoring losses after a debt ceiling crisis has ended, (2) the ability to exchange Treasury securities held by the Civil Service fund for agency securities held by the FFB, and (3) the use of the Exchange Stabilization Fund during a debt ceiling crisis. We also compiled and analyzed applicable source documents, including executive branch legal opinions, memos, and correspondence. We have provided these documents to the Committees' staffs. We performed our work between November 9, 1995, and July 1, 1996. Our audit was performed in accordance with generally accepted government auditing standards. We requested oral comments on a draft of this report from the Secretary of the Treasury or his designee. On August 22, 1996, Treasury officials provided us with oral comments that generally agreed with our findings and conclusions. Their views have been incorporated where appropriate. On August 10, 1993, the Congress raised the debt ceiling to $4.9 trillion, which was expected to fund government operations until spring 1995. In early 1995, analysts concluded that the debt ceiling would be reached in October 1995.
This set the stage for the 1995-1996 debt ceiling crisis, which was resolved on March 29, 1996, when Congress raised the debt ceiling to $5.5 trillion. The major actions taken by the Congress and the Executive Branch involving the 1995-1996 debt ceiling crisis are shown in table 2.1. Our analysis showed that, during the 1995-1996 debt ceiling crisis, Treasury used its normal investment and redemption procedures to handle the receipts and maturing investments and to redeem Treasury securities for 12 of the 15 trust funds we examined. These 12 trust funds accounted for about 65 percent, or about $871 billion, of the $1.3 trillion in Treasury securities held by the federal trust funds on October 31, 1995. The trust funds included in our analysis are listed in table 3.1. Trust funds which are allowed to invest receipts, such as the Social Security funds, normally invest them in nonmarketable Treasury securities. Under normal conditions, Treasury is notified by the appropriate agency of the amount that should be invested or reinvested, and Treasury then makes the investment. In some cases, the actual security that Treasury should purchase is also specified. When a trust fund needs to pay benefits and expenses, Treasury is normally notified of the amount and the date that the disbursement is to be made. Depending on the fund, Treasury may also be notified to redeem specific securities. Based on this information, Treasury redeems a fund’s securities. Between November 15, 1995, and March 28, 1996, Treasury followed its normal investment and redemption policies for all of the trust funds shown in table 3.1. For example, during this period, Treasury invested about $156.7 billion and redeemed about $115.8 billion of Treasury securities on behalf of the Social Security funds and invested about $7.1 billion and redeemed about $6.8 billion of Treasury securities on behalf of the Military Retirement Fund. The departures from normal investment and redemption procedures involving the other three trust funds (Civil Service fund, G-Fund, and Exchange Stabilization Fund), which held over $370 billion of Treasury securities on October 31, 1995, or about 28 percent of the Treasury securities held by all federal trust funds at that time, are discussed in chapters 4 and 5. During the 1995-1996 debt ceiling crisis, the Secretary of the Treasury redeemed Treasury securities held by the Civil Service fund and suspended the investment of some Civil Service fund receipts. Also, Treasury exchanged Treasury securities held by the Civil Service fund for non-Treasury securities held by the FFB. Subsection (k) of 5 U.S.C. 8348 authorizes the Secretary of the Treasury to redeem securities or other invested assets of the Civil Service fund before maturity to prevent the amount of public debt from exceeding the debt ceiling. The statute does not require that early redemptions be made only for the purpose of making Civil Service fund benefit payments. Furthermore, the statute permits the early redemptions even if the Civil Service fund has adequate cash balances to cover these payments. During November 1995 and February 1996 the Secretary of the Treasury redeemed about $46 billion of the Civil Service fund’s Treasury securities before they were needed to pay for trust fund benefits and expenses. Table 4.1 shows an example of the use of this procedure during the 1995-1996 debt ceiling crisis. 
Before redeeming Civil Service fund securities earlier than normal, the Secretary must first determine that a "debt issuance suspension period" exists. Such a period is defined as any period for which the Secretary has determined that obligations of the United States may not be issued without exceeding the debt ceiling. The statute authorizing the debt issuance suspension period and its legislative history are silent as to how to determine the length of a debt issuance suspension period. On November 15, 1995, the Secretary declared a 12-month debt issuance suspension period. On February 14, 1996, the Secretary extended this period from 12 to 14 months. The Secretary, in the November 15, 1995, determination, stated that a debt issuance suspension period existed for a period of 12 months "[b]ased on the information that is available to me today." A memorandum to the Secretary from Treasury's General Counsel provided the Secretary with a rationale to support his determination. The memorandum noted that based on the actions of the Congress and the President and on public statements by both these parties, there was a significant impasse that made it unlikely that a statute raising the debt ceiling could be enacted. Furthermore, the positions of the President and the Congress were so firm that it seemed unlikely that an agreement could be reached before the next election, which was 12 months away. The Secretary extended the debt issuance suspension period by 2 months on February 14, 1996. Treasury's General Counsel again advised the Secretary concerning the reasons underlying the extension and noted that nothing had changed since November to indicate that the impasse was any closer to being resolved. The General Counsel further reasoned that it would take until January 1997 for a newly elected President or a new Congress to be able to enact legislation raising the debt ceiling. On November 15, 1995, the Secretary authorized the redemption of $39.8 billion of the Civil Service fund's Treasury securities, and on February 14, 1996, authorized the redemption of another $6.4 billion of the fund's Treasury securities. The total of $46 billion in authorized redemptions was determined based on (1) the 14-month debt issuance suspension period determination made by the Secretary (November 15, 1995, through January 15, 1997) and (2) the estimated monthly Civil Service fund benefit payments. Treasury considered appropriate factors in determining the amount of Treasury securities to redeem early. About $39.8 billion of these securities were redeemed between November 15 and 30, 1995. Then, in December 1995, Treasury's cash position improved for a few days, primarily because of the receipt of quarterly estimated tax payments due in December. This inflow of cash enabled Treasury to reinvest, in late December 1995, about $21.2 billion in securities that had the same terms and conditions as those that were redeemed in November. However, because of Treasury's deteriorating cash position, these securities were again redeemed by the end of December. Finally, between February 15 and 20, 1996, an additional $6.4 billion in Treasury securities held by the Civil Service fund were redeemed. Subsection (j) of 5 U.S.C. 8348 authorizes the Secretary of the Treasury to suspend additional investment of amounts in the Civil Service fund if such investment cannot be made without causing the amount of public debt to exceed the debt ceiling.
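As an illustration of how the authorized early-redemption amount described above was sized, the following minimal Python sketch multiplies the expected length of the debt issuance suspension period by an estimate of monthly benefit payments. The $3.3 billion monthly figure is our assumption, chosen only so the arithmetic is visible; it is not Treasury's actual estimate.

```python
# Minimal sketch: the securities authorized for early redemption roughly equal
# estimated monthly benefit payments times the expected length of the debt
# issuance suspension period. The $3.3 billion monthly figure is an assumption
# for illustration, not Treasury's actual estimate.

def authorized_early_redemption(months_in_suspension_period: int,
                                estimated_monthly_benefits_billions: float) -> float:
    """Billions of dollars of securities that may be redeemed early to cover
    estimated benefit payments over the expected suspension period."""
    return months_in_suspension_period * estimated_monthly_benefits_billions

print(authorized_early_redemption(12, 3.3))   # November 1995 determination: roughly $40 billion
print(authorized_early_redemption(14, 3.3))   # February 1996 extension: roughly $46 billion
```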
Between November 15, 1995, and March 29, 1996, the Civil Service fund had about $20 billion in receipts. In all but one case, Treasury used its normal investment policies to handle the trust fund's requests to invest these receipts. The exception involved the trust fund's December 31, 1995, receipt from Treasury of a $14 billion semiannual interest payment on the fund's securities portfolio. The Secretary determined that investing these funds in additional Treasury securities would have caused the public debt to exceed the debt ceiling and, therefore, suspended the investment of these receipts. During the debt ceiling crisis, about $6.3 billion of the Civil Service fund's uninvested receipts were used to pay for the trust fund's benefits and expenses. Normally, government trust funds that are authorized to invest in Treasury securities do not have uninvested cash—all of a trust fund's receipts that are not needed to pay for benefits and expenses are invested. In the case of the Civil Service fund, when a redemption is necessary, Treasury's stated policy is to redeem the securities with the shortest maturity first. Should a group of securities have the same maturity date, but different interest rates, the securities with the lowest interest rate are redeemed first. During previous debt ceiling crises, Treasury's actions resulted in uninvested cash. The uninvested cash not only required restoring lost investment interest but also affected the normal method Treasury uses to determine which securities to redeem to pay for trust fund benefits and expenses. Accordingly, in 1989, Treasury developed policies and procedures for determining when uninvested trust fund cash should be used to pay trust fund benefits and expenses and used these policies during the 1995-1996 debt ceiling crisis. Overall, Treasury's policy continued to be to redeem the securities with the lowest interest rate first. However, in making this determination, uninvested cash is treated as though it had been invested in Treasury securities. These procedures are presented in table 4.2. The following illustrates how this policy was implemented. On January 2, 1996, Treasury needed about $2.6 billion to pay fund benefits and expenses for the Civil Service fund. To make these payments, it redeemed or used • $43 million of the fund's Treasury securities which carried an interest rate of 5-7/8 percent and matured on June 30, 1996; • $815 million of the fund's Treasury securities which carried an interest rate of 6 percent and matured on June 30, 1996 (these securities were redeemed before the uninvested cash was used because the $815 million had been invested prior to December 31, 1995); and • $1.7 billion of uninvested cash since the uninvested cash, if normal procedures had been followed, would have been invested on December 31, 1995, in 6 percent securities maturing on June 30, 1996. On February 14, 1996, about $8.6 billion in Treasury securities held by the Civil Service fund were exchanged for agency securities held by FFB. FFB used the Treasury securities it received in this exchange to repay some of its borrowings from Treasury. Since the Treasury securities provided by the Civil Service fund had counted against the debt ceiling, reducing these borrowings resulted in a corresponding reduction in the public debt subject to the debt ceiling. Thus, Treasury could borrow additional cash from the public.
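Before turning to the details of that exchange, the redemption-ordering policy illustrated by the January 2, 1996, payment can be sketched in Python as follows. This is a simplified rendering of the policy summarized above and in table 4.2, not Treasury's actual system; the dollar amounts are those cited in the text.

```python
from dataclasses import dataclass

@dataclass
class Holding:
    description: str
    amount: float        # dollars
    rate: float          # annual interest rate, e.g., 0.06
    maturity: str        # ISO date, used here only for ordering
    is_uninvested_cash: bool = False  # cash treated as though it had been invested

def select_redemptions(holdings, payment_needed):
    """Simplified sketch: draw down holdings shortest maturity first and, within
    a maturity date, lowest interest rate first, treating uninvested cash as if
    it had been invested in the securities it would normally have purchased."""
    plan, remaining = [], payment_needed
    for h in sorted(holdings, key=lambda h: (h.maturity, h.rate)):
        if remaining <= 0:
            break
        take = min(h.amount, remaining)
        plan.append((h.description, take))
        remaining -= take
    return plan

# January 2, 1996: about $2.6 billion of Civil Service fund benefits and expenses.
civil_service_fund = [
    Holding("5-7/8 percent securities maturing 6/30/96", 43e6, 0.05875, "1996-06-30"),
    Holding("6 percent securities maturing 6/30/96", 815e6, 0.06, "1996-06-30"),
    Holding("uninvested cash (treated as 6 percent securities maturing 6/30/96)",
            1.7e9, 0.06, "1996-06-30", is_uninvested_cash=True),
]

for description, amount in select_redemptions(civil_service_fund, 2.6e9):
    print(f"use ${amount / 1e9:.3f} billion of {description}")
```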
The decision to exchange Treasury securities held by the Civil Service fund for non-Treasury securities held by FFB required Treasury to determine (1) which non-Treasury securities were eligible for the exchange and (2) how to value the securities so that the exchange was fair to both the Civil Service fund and FFB. Treasury's objective was to ensure that the securities that were exchanged were of equal value and that the Civil Service fund would not incur any long-term loss. Regarding the first issue, the law governing the Civil Service fund does not specifically identify which securities issued by an agency can be purchased. However, the laws authorizing the Postal Service and the Tennessee Valley Authority to issue securities state that these securities are lawful investments of the federal trust funds (39 U.S.C. 2005(d)(3) and 16 U.S.C. 831n-4(d)). Regarding the second issue, the Treasury securities held by the Civil Service fund and the non-Treasury securities held by FFB had different terms and conditions, complicating the task of valuing the securities. For example, most of the Treasury securities held by the Civil Service fund mature on June 30 of a given year and can be redeemed at par when needed to pay benefits and expenses. None of the agency securities held by FFB and selected by Treasury for the exchange transaction matured on June 30, and if they were redeemed before maturity, the redemption price would be based on market interest rates. Because the effects of these differences can be significant, a methodology was needed to determine the proper valuation for the securities that would be exchanged. Therefore, Treasury used a generally accepted methodology to compute the value of each portfolio. Examples of factors used in this methodology include (1) the current market rates for outstanding Treasury securities at the time of the exchange, (2) the probability of changing interest rates, (3) the probability of the agency paying off the debt early, and (4) the premium that the market would provide to a security that could be redeemed at par regardless of the market interest rates. Treasury obtained the opinion of an independent third party to determine whether its valuations were accurate. Our review of the consultant's report showed that the consultant (1) identified the characteristics of each security to be exchanged, (2) reviewed the pricing methodology to be used, (3) calculated the value of each security based on the pricing methodology, and (4) reviewed the terms and conditions of the exchange agreement. The consultant concluded that the exchange was fair. Due to the complexity of the consultant's computations and the large number of securities exchanged, we did not independently verify the consultant's conclusion. The factors included in Treasury's methodology and the consultant's analysis were appropriate for assessing the exchange. Treasury's actions during the 1995-1996 debt ceiling crisis involving the Civil Service fund were in accordance with statutory authority provided by the Congress and the administrative policies and procedures established by Treasury. These actions helped the government to avoid default on its obligations and to stay within the debt ceiling. Specifically, we conclude the following: • Based on the information available to the Secretary when the November 15, 1995, and February 14, 1996, debt issuance suspension period determinations were made, the Secretary's determinations were not unreasonable.
• Treasury considered appropriate factors in determining the amount of Treasury securities to redeem early. • The Secretary acted within the authorities provided by law when suspending the investment of Civil Service fund receipts. • Treasury’s policies and procedures regarding the uninvested funds are designed primarily to facilitate the restoration of fund losses when Treasury does not follow its normal investment and redemption policies and procedures. They also provide an adequate basis for considering the uninvested receipts in determining the securities to be redeemed to pay Civil Service fund benefits and expenses during the debt ceiling crisis. • The agency securities used in the exchange between the Civil Service fund and FFB were lawful investments for the Civil Service fund. In addition, by having an independent verification of the value of the exchanged securities, Treasury helped to ensure that both the Civil Service fund and FFB were treated equitably in the exchange. In addition to the actions involving the Civil Service fund, during the 1995-1996 debt ceiling crisis, the Secretary of the Treasury (1) suspended the investment of G-Fund receipts and (2) did not reinvest some of the Exchange Stabilization Fund’s maturing securities. Also, the Congress authorized Treasury to issue selected securities that were temporarily exempted from being counted against the debt ceiling. These actions also assisted Treasury in staying under the debt ceiling. Subsection (g) of 5 U.S.C. 8438 authorizes the Secretary of the Treasury to suspend the issuance of additional amounts of obligations of the United States to the G-Fund if such issuance cannot be made without causing the amount of public debt to exceed the debt ceiling. Each day, between November 15, 1995, and March 18, 1996, Treasury determined the amount of funds that the G-Fund would be allowed to invest in Treasury securities and suspended the investment of G-Fund receipts that would have resulted in exceeding the debt ceiling. On November 15, 1995, when the Secretary determined a debt issuance suspension period, the G-Fund held about $21.6 billion of Treasury securities maturing on that day. In order to meet its cash needs, Treasury did not reinvest about $18 billion of these securities. Until March 19, 1996, the amount of the G-Fund’s receipts that Treasury invested changed daily depending on the amount of the government’s outstanding debt. Although Treasury can accurately predict the result of some of these factors affecting the outstanding debt, the result of others cannot be precisely determined until they occur. For example, the amount of securities that Treasury will issue to the public from an auction can be determined some days in advance because Treasury can control the amount that will actually be issued. On the other hand, the amount of savings bonds that will be issued and of securities that will be issued to, or redeemed by, various government trust funds are difficult to predict. Because of these difficulties, Treasury needed a way to ensure that the government’s trust fund activities did not cause the debt ceiling to be exceeded and also to maintain normal trust fund investment and redemption policies. 
To do this, each day during the debt ceiling crisis, Treasury • calculated the amount of public debt subject to the debt ceiling, excluding the funds that the G-Fund would normally invest; • determined the amount of G-Fund receipts that could safely be invested without exceeding the debt ceiling and invested this amount in Treasury securities; and • suspended investment of the G-Fund’s remaining funds. For example, on January 17, 1996, excluding G-Fund transactions, Treasury issued about $17 billion and redeemed about $11.4 billion of securities that counted against the debt ceiling. Since Treasury had been at the debt ceiling the previous day, Treasury could not invest the entire amount ($21.8 billion) that the G-Fund had requested without exceeding the debt ceiling. As a result, the $5.6 billion difference was added to the amount of uninvested G-Fund receipts and raised the amount of uninvested funds for the G-Fund to $7.2 billion on that date. Interest on the uninvested funds was not paid until the debt ceiling crisis ended. On several occasions between February 21 and March 12, 1996, Treasury did not reinvest some of the maturing securities held by the Exchange Stabilization Fund. Because the Fund’s securities are considered part of the government’s outstanding debt subject to the debt ceiling, when the Secretary does not reinvest the Fund’s maturing securities, the government’s outstanding debt is reduced. The purpose of the Exchange Stabilization Fund is to help provide a stable system of monetary exchange rates. The law establishing the Fund authorizes the Secretary to invest Fund balances not needed for program purposes in obligations of the federal government. This law also gives the Secretary the sole discretion for determining when, and if, the excess funds will be invested. During previous debt ceiling crises, Treasury exercised the option of not reinvesting the Fund’s maturing Treasury securities, which enabled Treasury to raise additional cash and helped the government stay within the debt ceiling limitation. In other actions to stay within the debt ceiling, the Congress passed legislation allowing Treasury to issue some Treasury securities that were temporarily exempted from being counted against the debt ceiling. During January 1996, Treasury’s cash position continued to deteriorate. The Secretary notified the Congress that, unless the debt ceiling was raised before the end of February 1996, Social Security and other benefit payments could not be made in March 1996. Under normal procedures, monthly Social Security benefits are paid by direct deposit on the third day of each month. Because checks take a period of time to clear, Treasury only redeems securities equal to the amount of benefits paid by direct deposit on this date. The securities necessary to pay the benefits made by check are redeemed on the third and fourth business day after the payments are made. This sequencing is designed to allow the fund to earn interest during the average period that benefit checks are outstanding but not cashed (the so-called “float period”). For Social Security payments, the check float period is about 3.6 days. According to Treasury officials, they may need to raise the actual cash needed to pay these benefits several days before the payments are made since the check float is an average. For example, some checks may clear the next business day while others may clear several days after the securities are redeemed. 
Under normal conditions, this is not a problem since Treasury is free to issue the securities to raise the necessary cash without worrying about when the trust fund securities will be redeemed. To ensure that these benefits would be paid on time, on February 8, 1996, the Congress provided Treasury with the authority (Public Law 104-103) to issue securities in an amount equal to the March 1996 Social Security payments. Further, this statute provided that the securities issued under its provisions were not to be counted against the debt ceiling until March 15, 1996, which was later extended to March 30, 1996. The special legislation did not create any long-term borrowing authority for Treasury since it only allowed Treasury to issue securities that, in effect, would be redeemed in March 1996. However, it allowed Treasury to raise significant amounts of cash. This occurred because March 15, 1996—the date initially established in the special legislation for which this debt would be counted against the debt ceiling—was later than the date that most of the securities would have been redeemed from the trust fund under normal procedures. On February 23, 1996, Treasury issued these securities. Following normal redemption policies, Treasury redeemed about $29 billion of Treasury securities from the Social Security fund for the March benefit payments. Since the majority of the Social Security fund payments are made at the beginning of the month, by March 7, 1996, Treasury had redeemed about $28.3 billion of the trust fund’s Treasury securities. This lowered the amount of debt subject to the limit, and Treasury was able to issue securities to the public for cash or invest trust funds receipts—as long as they were issued before March 15, 1996. Therefore, Treasury could raise an additional $28.3 billion in cash because of the difference in timing between when the securities could be issued (March 15, 1996) and when they were redeemed to pay fund benefits and expenses. According to Treasury officials, during the 1995-1996 debt ceiling crisis, this flexibility allowed Treasury to raise about $12 billion of cash. The remaining capacity was used to invest trust fund receipts. According to Treasury officials, this was the first time that Treasury had been provided with this kind of authority during a debt ceiling crisis. Providing this legislation was important because during a debt ceiling crisis, Treasury may not be free to issue securities in advance to raise the necessary cash. Without this legislation, Treasury would have had at least the following three choices, of which only the first would have been practical. • Trust fund securities could have been redeemed earlier than normal. This action was used in the 1985 debt ceiling crisis to make benefit payments for the Social Security and Civil Service funds. In exercising this option, securities could have been redeemed on the same day that a like amount of securities were issued to the public for cash; these issues would have had no effect on the amount of debt subject to the debt ceiling. However, since the securities would have been redeemed earlier than normal, the trust fund would have lost interest income. In the case of the Social Security funds, such a loss could not be restored without special legislation. • The government could have not paid the benefits. This option would have resulted in the government not meeting an obligation, which it has never done. 
• Treasury could have issued additional securities, which would have caused the debt ceiling to be exceeded, in violation of the law, and raised legal issues concerning the validity of the securities as obligations of the United States. According to Treasury officials, Treasury has never issued securities that would cause the debt ceiling to be exceeded. We reviewed Treasury reports and confirmed that, at least since July 1, 1954, this statement was correct. On March 12, 1996, the Congress enacted Public Law 104-115, which exempted government trust fund investments and reinvestments from the debt ceiling until March 30, 1996. Under the authority provided by this statute, between March 13 and March 29, 1996, Treasury issued about $58.2 billion in Treasury securities to government trust funds as investments of their receipts or reinvestments of their maturing securities. In addition, using its normal redemption policies, Treasury redeemed significant amounts of Treasury securities, which counted against the debt ceiling, held by various government trust funds to pay for benefits and expenses. Thus, Treasury was provided the ability to raise significant amounts of cash because these actions reduced the amount of public debt subject to the debt ceiling. To designate government trust fund investments that were not considered subject to the debt ceiling, Treasury issued special Treasury securities. This enabled Treasury, at the time a trust fund redemption was made, to identify whether the redemption lowered the amount of outstanding debt subject to the debt ceiling. For example, on March 12, 1996, the Civil Service fund invested about $100 million in Treasury securities that were counted against the debt ceiling and on March 14, 1996, invested about $184 million in Treasury securities that were exempt. Therefore, if on March 19, 1996, using normal procedures, Treasury redeemed the trust fund's Treasury securities to pay for benefits and expenses, it would know whether, or how much of, the redemption reduced outstanding securities subject to the debt ceiling. A similar determination could also be made for securities that were reinvested. For example, on March 12, 1996, the Postal Service fund had about $1.2 billion in maturing securities that were subject to the debt ceiling. These funds were reinvested in securities that matured the next business day and were not subject to the debt ceiling. As a result, the amount of debt subject to the debt ceiling decreased by this amount, thus enabling Treasury to issue additional securities to the public for cash. On March 14, 1996, this reinvestment matured and was again reinvested. This transaction did not change the amount of securities subject to the debt ceiling because the maturing securities did not count against the debt ceiling when they were issued. During the 1995-1996 debt ceiling crisis, Treasury acted in accordance with statutory authorities when it (1) suspended some investments of the G-Fund, (2) exercised its discretion in not reinvesting some of the Exchange Stabilization Fund's maturing Treasury securities, and (3) issued certain Treasury securities to government trust funds without counting them toward the debt ceiling. During the 1995-1996 debt ceiling crisis, Treasury did not exceed the $4.9 trillion debt ceiling limitation established in August 1993.
However, Treasury’s actions during the crisis resulted in the government incurring about $138.9 billion in additional debt that would normally have been considered as subject to the debt ceiling. Several of Treasury’s actions during the debt ceiling crisis also resulted in interest losses to certain government trust funds. Our analysis showed that, because of several of the actions discussed in chapters 4 and 5, the government incurred about $138.9 billion in debt that Treasury would have normally included in calculating debt subject to the debt ceiling. The methods of financing this additional debt are presented in table 6.1. It was necessary for Treasury to issue debt to raise the funds necessary to honor authorized government obligations. Consequently, actions by the Congress and Treasury during the 1995-1996 debt ceiling crisis allowed Treasury to avoid defaulting on government obligations while staying under the debt ceiling. On March 29, 1996, legislation was enacted to raise the debt ceiling to $5.5 trillion, which ended the debt ceiling crisis. The legislation enabled Treasury to resume its normal issuance and redemption of trust fund securities and, where statutorily allowed, to begin restoring the interest losses government trust funds incurred during the debt ceiling crisis. Passage of this legislation was inevitable; without it, the federal government’s ability to operate was jeopardized. The level of the public debt is determined by the government’s prior spending and revenue decisions along with the performance of the economy. In 1979, we reported that debt ceiling increases were needed simply to allow borrowing adequate to finance deficit budgets which had already been approved. The Civil Service fund incurred $995 million in interest losses during the 1995-1996 debt ceiling crisis. In 5 U.S.C. 8348, the Congress recognized that the Civil Service fund would be adversely affected if Treasury exercised its authority to redeem Treasury securities earlier than normal or failed to promptly invest trust fund receipts. To ensure that the fund would not have long-term losses, the Congress provided Treasury with the authority to restore such losses once a debt ceiling crisis was resolved. Under this statute, Treasury took the following actions once the debt ceiling crisis had ended. • Treasury reinvested about $46 billion in Treasury securities which had the same interest rates and maturities as those redeemed during November 1995 and February 1996. We verified that, after this transaction, the Civil Service fund’s investment portfolio was, in effect, the same as it would have been had Treasury not redeemed these securities early. • Treasury issued about $250.2 million in Treasury securities to cover the interest that would have been earned through December 31, 1995, on the securities that were redeemed in November 1995. Treasury issued these securities to replace securities that would otherwise have been issued to the fund if normal investment polices had been followed. • Treasury issued about $33.7 million in Treasury securities associated with the benefit payments made from the Civil Service fund’s uninvested cash balances from January 1996 through March 29, 1996. We verified that, in completing this transaction, Treasury calculated the amount of securities that would have been contained in the Civil Service fund’s portfolio had normal investment and redemption policies been followed. 
Also, between December 31, 1995, and March 29, 1996, the Civil Service fund's Treasury securities that were redeemed early did not earn about $711 million in interest. As required by law, Treasury restored this lost interest on June 30, 1996, when the semiannual interest payment for these securities would have been paid if normal procedures had been followed.

Between November 15, 1995, and March 29, 1996, the G-Fund lost about $255 million in interest because its excess funds were not fully invested. As discussed in chapter 5, the amount of funds invested for the G-Fund fluctuated daily during the debt ceiling crisis, with the investment of some funds being suspended. In 5 U.S.C. 8438(g), the Congress recognized that the G-Fund would be adversely affected if Treasury exercised its authority to suspend G-Fund investments. To ensure that the Fund would not have long-term losses, the Congress provided Treasury with the authority to restore such losses once a debt ceiling crisis was resolved. When the debt ceiling was raised, Treasury restored the lost interest on the G-Fund's uninvested funds. Consequently, the G-Fund was fully compensated for its interest losses during the 1995-1996 debt ceiling crisis.

During the 1995-1996 debt ceiling crisis, the Exchange Stabilization Fund lost about $1.2 million in interest. As discussed in chapter 5, these losses occurred because Treasury, to avoid exceeding the debt ceiling, did not reinvest some of the maturing Treasury securities held by the Exchange Stabilization Fund. Treasury officials said that the Fund's losses could not be restored without special legislation authorizing Treasury to do so. They said further that such legislation was not provided during the 1995-1996 debt ceiling crisis. Consequently, without specific legal authority, Treasury cannot restore the Exchange Stabilization Fund's losses. As of August 1, 1996, Treasury had no plans to seek such statutory authority.

During the 1995-1996 debt ceiling crisis, the federal government's debt increased substantially. Under normal procedures, this debt would have been considered in calculating whether the government was within the debt ceiling. Regarding restoration of the Civil Service fund, Treasury restored the securities that would have been issued had a debt issuance suspension period not occurred and the interest losses. Treasury's restoration actions will eliminate any long-term losses to the Civil Service fund. Also, Treasury restored the G-Fund's interest losses, ensuring that the G-Fund will not incur any long-term adverse effects from Treasury's actions. Regarding the Exchange Stabilization Fund, Treasury cannot restore the $1.2 million in interest losses resulting from the Secretary's decision not to reinvest the Fund's maturing Treasury securities without special statutory authority.
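The interest losses described above arise when balances sit uninvested or securities are redeemed early. As a rough illustration only, and not Treasury's or the funds' actual restoration methodology, the sketch below accrues simple daily interest on a flat uninvested balance; the balance and rate are made-up inputs, and in practice the uninvested amounts fluctuated daily, so the real calculations were made from day-by-day balances.

```python
from datetime import date

def forgone_interest(balance: float, annual_rate: float, start: date, end: date) -> float:
    """Approximate interest lost on a balance left uninvested between two dates,
    accrued daily on an actual/365 basis (an illustrative convention only)."""
    days = (end - start).days
    return balance * annual_rate * days / 365.0

# Hypothetical inputs: $10 billion left uninvested for the length of the crisis
# at an assumed 6 percent annual rate.
loss = forgone_interest(10e9, 0.06, date(1995, 11, 15), date(1996, 3, 29))
print(f"Approximate forgone interest: ${loss:,.0f}")
```

Restoring a loss of this kind, where the law allows it, amounts to crediting the fund with the interest that a day-by-day calculation shows it would otherwise have earned.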
Pursuant to a congressional request, GAO reviewed the Department of the Treasury's actions during the 1995-1996 debt ceiling crisis, focusing on: (1) investments and redemptions in federal trust funds; and (2) the Treasury's restoration of fund losses. GAO found that: (1) during the 1995-1996 debt ceiling crisis, Treasury followed normal investment and redemption procedures for 12 of the 15 major government trust funds; (2) Treasury suspended normal investments and redemptions for the Civil Service Retirement and Disability Trust Fund, Government Securities Investment Fund (G-Fund), and Exchange Stabilization Fund and took other actions to stay within the debt ceiling; (3) these actions were proper and consistent with the Secretary of the Treasury's legal authority; (4) as required, the Secretary of the Treasury determined in November 1995 that a debt issuance suspension period existed prior to exercising his authority; (5) Treasury redeemed $46 billion in Civil Service fund securities in November 1995 and February 1996 and suspended investment of $14 billion in fund receipts in December 1995; (6) Treasury exchanged about $8.6 billion in Civil Service fund securities for Federal Financing Bank (FFB) securities, which FFB then used to repay borrowings from the Treasury; (7) Treasury suspended some investments and reinvestments of G-Fund receipts and maturing securities during the crisis; (8) on several occasions, Treasury did not reinvest some of the maturing securities held by the Exchange Stabilization Fund; (9) in March 1996, Treasury issued some securities that were temporarily exempt from the debt ceiling, which allowed it to pay $29 billion in social security benefits and invest $58.2 billion in fund receipts and maturing securities; (10) although the Treasury did not technically exceed the debt ceiling during the crisis, the government incurred about $138.9 billion in additional debt that normally would have been subject to the ceiling; (11) several Treasury actions resulted in interest losses to certain government trust funds; and (12) Congress raised the debt ceiling to $5.5 trillion at the end of March 1996, and Treasury fully restored the Civil Service fund's and the G-Fund's interest losses by June 1996, but it could not restore the Exchange Stabilization Fund's interest loss without special legislation.
Federal law has required broadcasters to identify sponsors of radio program content since 1927. Since that time, sponsorship identification requirements have been amended to expand and further define the requirements. In 1934, the Communications Act established FCC and gave it authority to administer sponsorship identification requirements for broadcasters, among other responsibilities, as well as creating additional sponsorship identification requirements. In 1944, FCC adopted rules implementing the sponsorship identification requirements created in 1934. These rules, which required a full and fair disclosure of the sponsor's identity, remain largely unchanged.

In 1960, the Communications Act was amended to redefine the situations in which broadcasters must identify program sponsors. The need for this change arose due largely to the payola scandals of the late 1950s. The conference report accompanying the 1960 act included 27 case examples to provide guidance on how the committee believed the sponsorship identification requirements should be applied. In 1963, FCC adopted regulations implementing the 1960 amendments and provided 36 sponsorship identification case examples as guidance.

In 1969, FCC adopted regulations applying sponsorship identification requirements modeled on the broadcast rules to cablecasting by community antenna television systems (CATV). Cablecasting was defined as "programming distributed on a CATV system which has been originated by the CATV operator or by another entity, exclusive of broadcast signals." In 1972, FCC updated its regulations to apply sponsorship identification requirements to programming under the exclusive control of cable television operators. In 1975, FCC clarified the meaning of its 1944 requirement for full and fair disclosure of the identity of the sponsor, modifying its regulations to make clear that broadcasters and cablecasters are expected to look beyond the immediate source of payment where they have reason to know (or could have known through the exercise of reasonable diligence) that the purchaser of the advertisement is acting as an agent for another, and to identify the true sponsor.

In 1991, FCC adopted rules requiring visual and audible sponsorship identification announcements for televised advertisements concerning political issues. In 1992, FCC amended the requirements for advertisements concerning candidates by removing the requirement for an audible announcement and adopting specific criteria for a visual announcement, requiring that the announcement consist of letters at least 4 percent of the vertical picture height and that the statement be shown for at least 4 seconds (7 FCC Rcd. 678 (1992); 7 FCC Rcd. 1616 (1992); rule codified at 47 C.F.R. § 73.1212(a)(2)(ii)). There have been no changes to the sponsorship identification statutes or regulations since 1992, although there have been more recent FCC cases affecting their interpretation and application.

These sponsorship identification requirements generally apply to different types of advertising. Table 1 shows the range of different types of content that often require a sponsorship announcement, including commercial advertising where the sponsor is not typically apparent and political or controversial issue advertising where an announcement must meet additional requirements. Presently, the Communications Act, as implemented by FCC, requires broadcasters and cablecasters to disclose to their listeners or viewers if content has been aired in exchange for money, services, or other inducement.
For commercial content, such as advertisements, embedded advertisements, and video news releases, the announcement must be aired when the content is broadcast and can be either a visual or an audible announcement. Political and controversial content must also have either type of announcement when aired on television, but candidate advertisements have specific visual requirements. In addition, when anyone provides or promises to provide money, services, or other inducement to include programming in a broadcast, that fact must be disclosed to the broadcaster or cablecaster. Both the person providing or promising to provide the money, services, or other benefits and the recipient must make this disclosure so that the station can broadcast the sponsorship identification announcement.

The general public and, to a lesser extent, other sources, such as media public interest groups, file complaints with FCC about violations. Because of the high number of broadcasters and cablecasters, FCC relies on the public as an important source of information about compliance with requirements. The public is able to assist in monitoring broadcasters' compliance with requirements because, as we will describe later in our finding on requirements, broadcasters are required to maintain a publicly available inspection file. This publicly available file lists pertinent information, such as the quarterly issues and programs list, which describes the programs that have provided the station's most significant treatment of community issues during the preceding 3 months. The public can access the public inspection file to check and monitor broadcasters' compliance and service.

As part of its enforcement process, FCC investigates complaints about potential violations by broadcasters and cable operators of the statutes and rules it administers, including sponsorship identification complaints. Two bureaus within FCC—the Consumer and Governmental Affairs Bureau (CGB) and the Enforcement Bureau—are primarily responsible for developing and implementing procedures for processing complaints, conducting investigations, and taking enforcement actions. As part of its role of responding to consumer complaints and inquiries, CGB initially processes the majority of the complaints FCC receives. To the extent that a complaint is defective, for example, for failing to identify a particular broadcast, including its date, time of day, and the station on which it was aired, CGB will dismiss the complaint as incomplete and advise the complainant accordingly, providing guidance as to what information should be included in a complaint. CGB forwards most complaints deemed complete to the Enforcement Bureau, but some complaints, including political sponsorship identification complaints, go to the Media Bureau for further investigation. When FCC discovers violations, it can take various enforcement actions dependent on the seriousness of the violation, such as admonishment, monetary forfeiture, or not renewing a broadcaster's license.

In addition, FEC administers disclaimer requirements for political advertising under 2 U.S.C. § 441d, which require certain political advertisements to identify who paid for and, where applicable, who authorized the advertisements. The disclaimer requirements apply to any kind of communications media carrying public political advertising and are not limited to broadcast or cablecast advertisements. Similar to FCC, FEC initiates most enforcement activity pursuant to complaints received from the general public, including individuals and organizations associated with federal political campaigns.
Complaints are filed with the Office of Complaints Examination and Legal Administration (CELA), which reviews the complaint to determine if the complaint states facts, identifies the parties involved, and provides or identifies supporting documentation. If the complaints are deemed sufficient, CELA informs the complainants that they will be notified once the case has been resolved. If FEC discovers violations, enforcement can include negotiated corrective action, including civil penalties, or other Commission-initiated legal action seeking judicially imposed civil penalties or other relief.

As previously stated, sponsorship identification statutes and regulations require broadcasters and cablecasters to identify commercial content—usually an advertisement, an embedded advertisement, or a video news release (VNR) that has been broadcast in exchange for money or payment in-kind. According to most broadcasters we spoke with, commercial content is fairly straightforward to identify, and they are accustomed to dealing with such content and report that compliance is manageable. For content considered political or discussing a controversial public issue, the requirements, enforced by FCC and FEC, are more extensive and require more detailed on-air announcements and, for FCC, tracking of all related communications and agreements in a public file.

According to FCC, commercial advertisements do not always require a sponsorship announcement. FCC does not require an announcement when an obvious connection exists between the content and sponsor, such as with a standard commercial advertisement. For all commercial advertisements where the true sponsor is not apparent, even for infomercials that may be 30 minutes or longer, FCC requires a written or verbal announcement to occur only once during the program. This announcement must fully and fairly disclose the true identity of those who are paying for the advertisement. In addition, whenever an individual, such as a deejay, receives money or payment in-kind for airing specific content or favorably discussing a specific product, a sponsorship announcement must be made. Therefore, station employees must disclose such transactions to the station. Thus, if a record company or its agent pays a deejay to play records or talk favorably about an artist or record on the air, and the deejay does so, the result is considered sponsored content. FCC guidance directs that the deejay must reveal the payment to the broadcaster and a sponsorship announcement must be made when the content goes on the air. According to broadcasters, industry trade groups, and others we spoke with, compliance with these standards and requirements is not a problem because the requirements are part of their standard review process.

Embedded advertisements—typically, when a commercial product is provided without charge to use in entertainment programming—may not require a sponsorship announcement if they are reasonably related to the show. Since many consumers now record shows or change the channel during commercials, broadcasters have increased their use of embedded advertising. However, a sponsorship announcement is not required every time a product appears in a program. For example, FCC's guidance describes scenarios in which a manufacturer provides a car to a television show for a detective to chase and capture the villain, and states that the use of the car alone would not require a sponsorship announcement.
In this scenario the use of the car could be considered "reasonably related" to its use in the show. However, in the same scenario, if a character also made a promotional statement about the specific brand—such as its being fast—FCC requires a written or verbal sponsorship announcement sometime during the program. According to FCC's guidance, in this second scenario, the specific mention of the brand may go beyond what would be "reasonably related" to its use in the show. The reasonably related standard requires some assessment of the content. Broadcasters told us they have processes in place to review content on their networks to determine if a program needs a sponsorship announcement for an embedded advertisement.

VNRs are another type of commercial content which may require a sponsorship announcement. VNRs are pre-packaged news stories that may include only film footage or may also include suggested scripts. However, broadcasters do not always use a VNR in its entirety but may use portions of the video. For example, if a news story about car manufacturing could benefit by showing video from inside a manufacturing plant, a broadcaster may use footage from a VNR because it cannot easily access the interior of a plant during its operations. According to FCC, it requires broadcasters to air a sponsorship announcement when using VNRs, even when they are provided free of charge, under the same circumstances as it would require a sponsorship identification for other programming. When a film or a story was provided by a third party and it conveys value to the station, it must have either a verbal or written sponsorship announcement if it is furnished in consideration for identification of a product or brand name beyond what is reasonably related to the broadcast. This is an update to the guidance FCC issued in its 2005 Public Notice concerning the use of VNRs. In that Public Notice, FCC reminded broadcasters of their obligation to comply with sponsorship identification requirements when using VNRs and that there must be an announcement to the audience about the source and sponsorship of the VNR.

Nevertheless, some broadcasters disagree with FCC's position and believe it should treat VNRs similar to press releases and not require a sponsorship announcement if a broadcaster did not pay for the material and uses only a portion of its content. For example, in the previous illustration, a broadcaster may use VNR footage because it does not have access to the interior of a car manufacturing plant. In such instances, FCC requires broadcasters to make an announcement that could appear on the screen during the footage stating, "Footage furnished by ABC Motor Company," or as a verbal or written announcement at the end of the program. Broadcasters we spoke with had differing opinions on whether to use VNRs. While one broadcaster believes it should be up to the news program to determine if an announcement is needed, others we spoke with were divided about whether to use VNRs with a sponsorship announcement or to never use VNRs at all. Some broadcasters and others reported the use of VNRs has been increasing, in part, because of tighter news budgets and the need to fill airtime. In recent years, instances of VNR use without proper credit have been reported and investigated by FCC; we will discuss these instances later in our report.

Most broadcasters we spoke with indicated the sponsorship identification requirements are generally manageable as part of their review processes.
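The commercial-content rules discussed above reduce to a few questions: was something of value received, is the sponsor already obvious, and does furnished material such as a free product or VNR footage promote a brand beyond what is reasonably related to the program? The sketch below is a simplified decision helper, not FCC's test; the judgment calls are passed in as inputs because they require human assessment, and all names are illustrative.

```python
def needs_commercial_announcement(consideration_received: bool,
                                  sponsor_obvious: bool,
                                  furnished_material: bool = False,
                                  beyond_reasonably_related: bool = False) -> bool:
    """Return True if a sponsorship announcement appears to be needed (simplified).

    consideration_received: money, services, or something else of value was received.
    sponsor_obvious: the connection between content and sponsor is apparent,
        as with a standard commercial advertisement.
    furnished_material: the consideration is furnished material, such as a free
        product used in programming or VNR footage.
    beyond_reasonably_related: the material identifies a product or brand beyond
        what is reasonably related to its use in the program.
    """
    if not consideration_received:
        return False                       # nothing of value changed hands
    if sponsor_obvious:
        return False                       # e.g., a standard commercial advertisement
    if furnished_material:
        return beyond_reasonably_related   # free prop or VNR: announce only if it over-promotes
    return True                            # paid placement with a non-obvious sponsor

# The furnished car used only in the chase scene: no announcement needed.
print(needs_commercial_announcement(True, sponsor_obvious=False, furnished_material=True))
# The same car plus a scripted line praising the brand: announcement needed.
print(needs_commercial_announcement(True, sponsor_obvious=False, furnished_material=True,
                                    beyond_reasonably_related=True))
```

The two calls mirror FCC's car-chase example above: the furnished car alone needs no announcement, but a scripted plug for the brand does.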
As previously indicated, broadcasters told us they have processes in place to review the different types of advertisements and other programming aired. These reviews check to ensure the advertisement or programming meets FCC requirements, including the sponsorship identification requirements. Since it is part of the standard content review process and the sponsorship identification requirements have not changed for many years, broadcasters told us the requirements were not difficult to meet.

Political content and content discussing a controversial issue of public importance are subject to all requirements that commercial content must follow but have additional on-air and public file sponsorship identification requirements. Advertisements that contain political content or involve public discussion of a controversial issue must include a sponsorship announcement at the beginning or end of the program. If the program is longer than 5 minutes, the announcement must be made at both the beginning and end of the program. For broadcast television and political content discussing a candidate, a visual announcement that is at least equal to 4 percent of the screen height must be shown for at least 4 seconds. For broadcast radio, there must be an audible announcement. 47 C.F.R. §§ 73.1212(e) (for broadcast material), 76.1701(e) (for cablecast material). FCC also requires television broadcasters to post a majority of their public file on an FCC website by February 2013, making the file more easily accessible to the general public.

According to FEC, paid political communications supporting or opposing candidates for election to federal office, which include radio and television advertisements, are required under FEC statutes and regulations to contain what are called "disclaimer statements." Television and radio political advertisements authorized and paid for by a federal candidate or his or her campaign committee or another organization must include a disclaimer spoken by the candidate identifying him or herself and stating that he or she approved the advertisement, as well as a statement identifying the organization that paid for the advertisement. Television advertisements must either show the candidate making the disclaimer statement or show a clearly identifiable image of the candidate during the statement. They must also include a clearly readable written statement similar to the verbal statement that appears at the end of the advertisement for at least 4 seconds. Certain advertisements not approved by a candidate or his or her committee must identify who paid for the advertisement, state that it was not authorized by any candidate or candidate's committee, and list the permanent street address, telephone number, or World Wide Web address of the person who paid for the communication.

In addition to monitoring compliance with these disclaimer requirements, FEC serves as a repository for campaign finance data for candidates for political office. Just as stations licensed by FCC are required to preserve records to establish that they are meeting their responsibility, among others, to treat political candidates running for the same public office equally, FEC oversees requirements to report campaign funding and expenditures. Individuals, political committees, and other organizations supporting or opposing candidates for federal office are required to report campaign funding and expenditures for certain activities, which can include payments made for purchasing political advertising and information on the funds they receive for advertisements.
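The on-air announcement rules for political and controversial-issue content described above carry specific thresholds: an announcement at the beginning or end of the program, at both if the program runs longer than 5 minutes, and, for televised candidate material, a visual announcement at least 4 percent of the picture height shown for at least 4 seconds. A minimal sketch of those checks follows; it is illustrative only, and the field and function names are not FCC's.

```python
from dataclasses import dataclass

@dataclass
class PoliticalSpot:
    is_television: bool
    program_minutes: float
    discusses_candidate: bool
    announced_at_start: bool
    announced_at_end: bool
    visual_height_pct: float = 0.0   # announcement letter height as a % of picture height (TV)
    visual_seconds: float = 0.0      # how long the visual announcement is shown (TV)

def announcement_problems(spot: PoliticalSpot) -> list:
    """List apparent problems with the sponsorship announcement (simplified)."""
    problems = []
    if spot.program_minutes > 5:
        if not (spot.announced_at_start and spot.announced_at_end):
            problems.append("programs over 5 minutes need announcements at both beginning and end")
    elif not (spot.announced_at_start or spot.announced_at_end):
        problems.append("an announcement is needed at the beginning or end of the program")
    if spot.is_television and spot.discusses_candidate:
        if spot.visual_height_pct < 4 or spot.visual_seconds < 4:
            problems.append("televised candidate content needs a visual announcement of at "
                            "least 4 percent of the picture height shown for at least 4 seconds")
    return problems

spot = PoliticalSpot(is_television=True, program_minutes=0.5, discusses_candidate=True,
                     announced_at_start=False, announced_at_end=True,
                     visual_height_pct=4.0, visual_seconds=3.0)
print(announcement_problems(spot))   # flags only the 3-second visual
```

For radio, the announcement must of course be audible; the sketch covers only the frequency and television-format checks.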
This reporting of campaign funding and expenditures is done to assure that money is being collected and spent in accordance with federal election campaign law. The reporting requirements vary according to the purpose of the advertisement, who paid for the advertisement, and whether it is the product of a coordinated effort between political committees and others. The political committees are always subject to the reporting requirements and must submit itemized reports to FEC showing all of their expenditures, including advertising. Political committees must also submit itemized reports to FEC showing contributions received, including contribution amounts, dates, and contributor names and addresses.

FEC has also required organizations and individuals to report expenditures for political advertisements and donations of $1,000 or more made for certain political advertisements, called "electioneering communications," which are related to elections for federal office and broadcast within specified time frames before the elections. As described in FEC's guidance, FEC administers specific reporting requirements for those making electioneering communications, which can include political advertisements on television or radio. Specifically, electioneering communications refers to any broadcast, cable, or satellite communication that refers to a candidate for federal office and is distributed within specific time frames before a federal general election or federal primary election. Political advertisements that do not meet these specifications are not subject to these reporting requirements. Once payments for electioneering communications, including television or radio advertisements, exceed $10,000 in any calendar year, those responsible for them must report the payments and the sources of funds used to FEC within 24 hours of each broadcast. Each report must, among other things, identify the following (a simplified sketch of these reporting triggers appears after this list):

• the person or organization that made the payments, including their principal place of business;

• any person sharing or exercising direction or control over the activities of the person who made the payments;

• the amount of each payment in excess of $200, the payment dates, and the payee;

• all candidates referred to in the advertisements and the elections in which they are candidates; and

• the name and address of each person who donated $1,000 or more since the first day of the preceding calendar year to those responsible for the advertisement.
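The electioneering communications reporting triggers just listed turn on a few dollar and time thresholds: a report to FEC within 24 hours once calendar-year payments exceed $10,000, itemization of payments over $200, and identification of donors of $1,000 or more since the first day of the preceding calendar year. The sketch below is illustrative only and is not FEC's filing system; the payee and donor names are hypothetical.

```python
def report_due(total_payments_this_year: float) -> bool:
    """A 24-hour report is triggered once calendar-year payments exceed $10,000."""
    return total_payments_this_year > 10_000

def items_to_itemize(payments: list, donations: list) -> dict:
    """Select the payments and donors that must be itemized (simplified).

    payments:  [{"payee": str, "amount": float, "date": str}, ...]
    donations: [{"donor": str, "amount": float}, ...] received since the first
               day of the preceding calendar year.
    """
    return {
        "payments_over_200": [p for p in payments if p["amount"] > 200],
        "donors_1000_or_more": [d for d in donations if d["amount"] >= 1_000],
    }

payments = [{"payee": "Station WXYZ", "amount": 7_500, "date": "2012-10-01"},
            {"payee": "Station WABC", "amount": 4_000, "date": "2012-10-03"}]
donations = [{"donor": "J. Smith", "amount": 2_500},
             {"donor": "A. Lee", "amount": 500}]

if report_due(sum(p["amount"] for p in payments)):   # $11,500 exceeds $10,000
    print(items_to_itemize(payments, donations))
```

In this made-up example the two payments total $11,500, so a 24-hour report is due, itemizing both payments and the $2,500 donor.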
According to FEC, "coordinated communications" are contributions for communications that are coordinated with a candidate or party committee. These contributions are subject to amount limitations, source prohibitions, and reporting requirements under the Federal Election Campaign Act. Expenditures for communications that are not coordinated, also known as independent expenditures, have requirements to report itemized payments for the advertisements that are triggered if they expressly advocate the election or defeat of a clearly identified candidate. While independent expenditures are not subject to spending limitations, they are subject to the reporting requirements. Those responsible for the payments must report:

• their names, mailing addresses, occupations, and employers;

• the name and mailing address of the person to whom payments were made;

• the amount, date, and purpose of each payment;

• a statement indicating whether each payment was in support of, or in opposition to, a candidate, together with the candidate's name and office sought;

• the identification of each person who made a contribution in excess of $200 for the purpose of making the advertisement; and

• a verified certification as to whether such expenditure was made in cooperation, consultation, or concert with, or at the request or suggestion of, a candidate, a candidate's authorized committee, or its agents, or a political party committee or its agents.

Corporations, labor unions, individuals and businesses with federal government contracts, foreign citizens, and qualified non-profit corporations are prohibited from coordinating these advertisements.

FCC provides guidance on meeting sponsorship identification requirements, which broadcasters we spoke with generally report to be helpful. Broadcasters we spoke with told us they apply the sponsorship identification requirements many times a day during reviews of programs and when preparing news programs. This process involves network employees viewing all content, and if a program appears to focus too much on a product, then the broadcaster will ask the producer of the content for a sponsorship announcement. Broadcasters we spoke with indicated they tend to be cautious in reviewing and identifying sponsored content in order to avoid enforcement actions.

The guidance issued in FCC's 1963 report and order has remained substantially unchanged, and while many broadcasters indicate the guidance is still useful, it addresses issues with older technology that may no longer be relevant and does not discuss how the rules apply to newer technologies. Specifically, one case example discusses a broadcaster's use of a kinescope recording of a congressional hearing provided by a third party. The guidance states that "expensive kinescope prints dealing with controversial issues are being paid for by someone," and therefore the broadcaster should determine who and make a sponsorship announcement. While this case example provides guidance for the use of content discussing a controversial issue, which clearly falls under the sponsorship identification requirements, the cost of creating such content is less of an issue today. We have previously reported on the benefits of revisiting provisions of regulatory programs to determine if changes might be needed to better achieve the program's goals. Current technologies, such as digital video equipment and the Internet, allow similar content to be created and distributed, and such content is often publicly available at no cost; the guidance is not clear on how the rules apply in these situations. FCC officials told us the agency has not updated the guidance because there has been no need to update it. Rather, FCC officials said they have relied on case law and public notices, among other devices, to provide guidance to broadcasters and cablecasters. However, some broadcasters indicated that FCC could clarify how the guidance applies in specific situations, such as when a VNR or product is used but the broadcaster was not paid.
In its Public Notice issued in 2005, FCC reminded broadcasters and cablecasters of their sponsorship identification obligations and indicated that VNRs generally need a sponsorship announcement. According to FCC's enforcement reports, its enforcement actions against broadcasters' and cablecasters' use of VNRs have involved cases where the footage and script of a VNR focused too much on a specific brand or product, beyond what was reasonably related to the story. FCC told us these cases indicate that VNRs should follow the same rules regarding sponsorship identification as other programming. However, some stakeholders argue VNRs are similar to press releases because they are often provided with no money or payment in-kind and with no understanding or agreement that they will be broadcast on the air. According to FCC guidance, a press release does not need a sponsorship announcement. Some broadcasters indicated they remain unsure of when there needs to be a sponsorship announcement as part of a VNR. As a result, FCC's interpretation and the broadcasters' interpretation of how the requirements apply to VNRs remain vastly different, in part because no payment is made to the broadcaster to air the VNR. The Public Notice in 2005 sought comment on a number of issues related to the use of VNRs and also indicated FCC intends to issue a report or initiate a formal proceeding based on the comments received. As of the issuance of this report, however, FCC has taken no further action in response to the comments received.

FCC's investigation and enforcement process generally begins with a complaint to the agency that will be reviewed by CGB and may be forwarded to the Enforcement Bureau if the complaint is complete. As previously indicated, FCC receives complaints primarily through CGB. Since 2003, FCC has received over 200,000 complaints of all types annually through CGB, some of which it dismisses as incomplete. Other complaints deemed to be complete, including sponsorship identification complaints, are forwarded to the Enforcement Bureau for possible investigation and enforcement. Complaints involving non-political sponsorship identification issues are forwarded to the Enforcement Bureau, but complaints raising political sponsorship identification issues go to the Media Bureau. When the Enforcement Bureau receives a complaint, the bureau conducts several reviews before the complaint is classified as a sponsorship identification complaint and before the broadcaster named in the complaint is contacted, as shown in figure 1. First, if a complaint is related to an alleged sponsorship identification violation, the complaint goes to the Investigations and Hearings Division, where a manager conducts a review of the complaint. If the manager determines the subject of the complaint to be sponsorship identification related or related to another topic handled by the Investigations and Hearings Division, then the complaint is assigned to an attorney. The attorney enters it into the database, at which time it is considered a case and can be classified as related to sponsorship identification. A case may be linked to numerous complaints, or a case may be opened even though FCC received no complaints. For example, in 2007, FCC received over 18,000 complaints and opened 3 sponsorship identification cases in response to a single incident wherein a nationally syndicated radio and television host discussed the "No Child Left Behind" program and did not disclose that the discussion was sponsored.
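The intake path just described, in which CGB screens complaints, political sponsorship matters go to the Media Bureau, other complete complaints go to the Enforcement Bureau, and the Investigations and Hearings Division assigns sponsorship matters to an attorney who opens a case, can be pictured as a simple routing function. The sketch below is illustrative only, not FCC's actual system, and the field names and station call sign are hypothetical.

```python
def route_complaint(complaint: dict) -> str:
    """Route an FCC complaint through the simplified intake steps described above."""
    # CGB dismisses defective complaints, for example those that fail to identify
    # the broadcast's station, date, and time of day.
    if not all(complaint.get(key) for key in ("station", "date", "time_of_day")):
        return "dismissed as incomplete; complainant advised what to include"
    # Complete complaints raising political sponsorship identification issues
    # go to the Media Bureau.
    if complaint.get("political"):
        return "forwarded to the Media Bureau"
    # Other complete complaints go to the Enforcement Bureau; sponsorship matters
    # are assigned to an attorney in the Investigations and Hearings Division,
    # entered into the database, and opened as a case.
    if complaint.get("sponsorship_related"):
        return "Enforcement Bureau: assigned to an attorney and opened as a case"
    return "forwarded to the Enforcement Bureau"

print(route_complaint({"station": "WXYZ-TV", "date": "2011-05-01",
                       "time_of_day": "8 p.m.", "political": False,
                       "sponsorship_related": True}))
```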
As shown in table 2, according to FCC, it opened 369 sponsorship identification cases from the beginning of 2000 through 2011, representing just over 1 percent of the total cases the Investigations and Hearings Division opened during that time period. According to FCC officials, after opening a case, the attorney conducts a more substantive review to determine if the allegations in the complaint, if true, would constitute a possible violation of any statutes or regulations. If warranted, the attorney may contact the complainant for more information. If it is determined that no violation occurred or can be supported, the case may be closed following the substantive review by making a note in the case file. If this substantive review determines a violation may have occurred, then FCC will send a letter of inquiry to the broadcaster named in the complaint, initiating an in-depth investigation. As shown in table 2, 101 cases were closed following a substantive review and prior to a letter of inquiry being sent.

If a case proceeds to a letter of inquiry being sent, the letter serves as a notification to the broadcaster named in the complaint that it is under investigation. The letter of inquiry requests specific information concerning the complaint, such as a video of the broadcast. As shown in table 2, from 2000 to 2011, FCC sent 242 letters of inquiry for sponsorship identification cases. Next, FCC reviews the information provided by the broadcaster named in the complaint in response to the letter of inquiry. If FCC determines that a violation of the sponsorship identification requirements did not occur or cannot be proven, it can close the case. As with the process of closing a case prior to sending a letter of inquiry, FCC closes the case by making a note in the case file but typically does not inform the broadcaster named in the complaint. From 2000 through 2011, FCC reported it closed 195 cases with no enforcement action following a letter of inquiry.

In other cases, following the letter of inquiry, FCC may determine that a violation occurred and seek an enforcement action. Since 2000, FCC has issued enforcement actions in 22 sponsorship identification cases with varying types of violations and enforcement actions. For example, in 2011 FCC issued a notice of apparent liability to KMSP-TV in Minnesota for airing a VNR without identifying the sponsor of the release. KMSP-TV was fined $4,000. In 2007, FCC issued a notice of apparent liability to Sonshine Family Television for airing five different episodes of programs on ten separate occasions, during which an individual discussed the federal "No Child Left Behind" program. Sonshine was fined $40,000 because the station failed to identify the sponsor of the content. Since 2000, FCC has also agreed to 10 consent decrees with different broadcasters that include the broadcaster's adopting a plan for compliance and making a voluntary contribution to the United States Treasury. The voluntary payments to the Treasury have varied in amount, from as little as $12,000 for a pay-for-play incident to as much as $4 million for a separate but similar incident.

While most complaints do not end with an enforcement action, FCC generally does not communicate with the broadcaster named in the complaint when it closes sponsorship identification investigations.
As previously indicated, the letter of inquiry notifies the broadcaster named in the complaint that an investigation is under way, but following that communication FCC does not provide any information on the investigation unless the case results in an enforcement action. GAO has previously reported that FCC enforcement actions can help correct identified compliance problems and deter future noncompliance. Similarly, informing a broadcaster under investigation that a matter has been closed could help inform broadcasters about compliant activities. Furthermore, while not specifically related to sponsorship identification issues, in an effort to promote open government and public participation, the administration has developed a plan to increase openness in the government. The plan includes an initiative to enhance enforcement of regulations through further disclosure of compliance information. This builds on previous guidance to use increased disclosure to provide relevant information to help make decisions. It further directs agencies to develop plans for providing greater transparency about their enforcement activities and for making such information available online.

However, broadcasters we spoke with confirmed that FCC does not inform them of the status of investigations, and some indicated they currently do not know the status of several investigations. They reported the lack of information about cases and FCC decisions creates uncertainty about the propriety of their past actions. In addition, this practice of not informing broadcasters about the results of investigations does not align with the administration's goals to disclose compliance information to help regulated entities make decisions. As a result, broadcasters might not have sufficient information to determine whether they should modify their practices. This could result in stations unnecessarily editing content because of unwritten regulatory policy or what they assume the policy to be.

According to FCC officials, they do not communicate with the broadcaster named in the complaint because, among other reasons, FCC has no legal obligation to do so. In addition, FCC officials identified several other factors as to why it does not communicate with the broadcaster named in the complaint. First, FCC officials told us it does not want to inform the broadcaster named in the complaint that a case was closed because it may want to reopen the case if new evidence presents itself. Second, officials also said that FCC does not want closing a case to be inaccurately interpreted as an endorsement of the action being investigated, even if the investigation does not result in a finding of a violation. Finally, officials indicated informing the broadcaster named in the complaint about closure of an investigation would require crafting a letter tailored to fit the unique set of facts and requirements for each case. This would be resource intensive, and according to FCC officials, FCC does not have sufficient resources to implement such practices.

FCC sponsorship identification investigations can be lengthy, according to FCC data, taking from 10 months to over 5 years to complete. As shown in table 3, the shortest time period for resolution of a case with an enforcement action was 10 months. The process to negotiate a consent decree takes longer because it often involves complex negotiations between FCC and a broadcaster.
Even when FCC sends a letter of inquiry and the investigation results in no enforcement action, according to FCC officials, the median length of time to close investigations was 38 months for approximately 200 cases. In 2011, FCC set a performance goal to resolve 90 percent of sponsorship identification cases within 15 months. According to FCC officials, FCC missed its goal by a few days, although officials could not provide data to support this. Specific goals about timeliness of investigations provide better service for regulated entities, but in 2012 and 2013 FCC removed this goal.

As previously stated, paid political advertisements require disclaimer statements, and FEC's enforcement process typically begins with a complaint to CELA. FEC receives most disclaimer complaints from the public. As shown in figure 2, complaints proceed through several procedural steps, including a review and vote by the FEC commissioners. CELA reviews the complaint for compliance with required criteria, including ensuring it identifies the parties involved, involves a violation of campaign finance law, and provides supporting documentation. If a complaint does not meet these criteria, CELA notifies the complainant of the deficiencies and informs them that no action can be taken pursuant to the complaint unless those deficiencies are resolved. If the complaint meets the criteria, CELA informs the complainant that they will be notified when the matter has been resolved.

From 2000 through May 15, 2012, FEC opened 301 cases based on complaints alleging violations of disclaimer statement requirements. The cases were based on complaints alleging violations of the disclaimer requirements for advertisements using various media, including television and radio, letters, and billboards. For example, in 2006, a complaint alleged a television advertisement for a congressional candidate in Ohio failed to include an oral statement that identifies the candidate and states that the candidate has approved the communication. Less than 17 percent of the complaints alleging violations of disclaimer statement requirements involved television or radio disclaimer requirements.

Prior to taking any action other than dismissing a complaint, FEC provides the entity named in the complaint at least 15 days to respond and demonstrate that no violation occurred. After the response period, FEC's Office of General Counsel evaluates the complaint and response and may refer the case to the Alternative Dispute Resolution Office. This office provides solutions for settling cases in lieu of litigation and allows FEC to settle the case early in the enforcement process. While alternative dispute resolution avoids the litigation process, the entity named in the complaint must commit to terms for participation in an alternative dispute resolution, which include setting aside the statute of limitations and participating in negotiations to settle the case, among other conditions. Alternative dispute resolution settlements generally require entities named in the complaints to take corrective action, such as hiring compliance specialists, designating persons as responsible for disclosure, and attending educational conferences. Generally, FEC does not refer cases for alternative dispute resolution that are highly complex, that involve incidents where FEC believes there was knowing and willful intent to commit violations, or that involve potential violations in areas that FEC has set as priorities.
For cases not recommended for alternative dispute resolution, FEC Commissioners vote before an investigation is initiated. The Federal Election Campaign Act requires that FEC find reason to believe that a person has committed, or is about to commit, a violation as a precondition to opening an investigation into an alleged violation. Should the Commissioners vote to find reason to believe a violation occurred, FEC and the alleged violator can negotiate a conciliation agreement that can include a monetary penalty or corrective action. If the Commission needs additional information prior to settling a case using a conciliation agreement, the Enforcement Division conducts an investigation. Violations not resolved with a conciliation agreement can result in the Commission filing suit against the respondents. Our review of FEC data found that the disclaimer cases resulted in 330 case outcomes, ranging from dismissals to civil penalties through conciliation agreements. However, as shown in table 4, of the 38 outcomes that could have ended with a civil penalty—conciliation agreement, alternative dispute resolution agreement, and lawsuit—FEC assessed civil penalties in only 29 cases, 7 of which were related to television or radio disclaimers.

Unlike FCC, FEC provides status updates to those involved in investigations and issues reports explaining investigation findings. On December 31, 2009, FEC issued guidelines for tracking and reporting the status and time frames of complaint responses, investigations, and enforcement actions. The guidelines require the FEC's Office of General Counsel and the Office of Alternative Dispute Resolution to provide the Commissioners and affected parties with a status report once per year for cases in which the Commissioners have not yet voted on the recommendation made by the General Counsel or the Office of Alternative Dispute Resolution based on their initial reviews. These status reports include an estimate of when the Commissioners will vote.

Also unlike FCC, FEC issues reports explaining its resolution of enforcement cases, including case dismissals. These reports can clarify acceptable and unacceptable practices for the regulated community. For example, during 2007, FEC received a complaint alleging that a candidate had violated television advertisement disclaimer requirements by including an improper disclaimer in the advertisement. The complaint alleged that the printed disclaimer violated the requirements because it did not also state that the candidate approved the advertisement. FEC dismissed the case in an exercise of its prosecutorial discretion to not pursue a violation, in part because of partial compliance with disclaimer requirements. In doing so, FEC observed that the verbal disclaimer identified the candidate and informed the public of the candidate's approval of the advertisement and the printed disclaimer informed the public that the candidate's committee paid for the advertisement.

FCC receives hundreds of thousands of complaints related to all areas it regulates, but there have been only a small number of sponsorship identification cases. Of the sponsorship identification cases opened by FCC, only a handful have resulted in enforcement actions against broadcasters, and many of those enforcement actions were for fines of less than $100,000.
Most broadcasters told us they generally have no problems meeting the sponsorship identification requirements because they have processes in place to review all content and ensure it has a sponsorship announcement if needed. However, FCC guidance for the sponsorship identification requirements has not been updated in nearly 50 years to address more modern technologies and applications. We have previously reported that retrospective reviews of regulations can change the behavior of regulated entities. Similarly, a review and update of FCC guidance that discusses outdated technologies could result in changes in behavior. One example discusses a broadcaster's use of expensive kinescope prints as part of a story on a controversial issue. The example directs that such a use should receive a sponsorship announcement because of the controversial issue being discussed and the cost of the film. Yet, today, because the expense of providing film is no longer relevant, broadcasters may be unsure whether the concern is the expense of the film or the controversial issues discussed in the film. FCC should clarify its guidance to clearly address how content should be treated when it is provided with no money or payment in-kind and does not discuss a controversial issue. Furthermore, FCC should clarify its examples to direct broadcasters' treatment of content provided with no money or payment in-kind that does not highlight a product or brand beyond the "reasonably related" standard, such as a VNR. FCC has indicated VNRs must have a sponsorship announcement; however, FCC's enforcement of VNRs has not found fault with the use of a VNR itself but rather with instances in which the VNR focuses on a specific product. Stakeholders disagree on the use of VNRs. FCC's enforcement actions and guidance do not distinguish how to act when portions of VNRs are used or when a VNR does not disproportionately focus on a product or brand. FCC indicated in 2005 that it would issue a report or take other necessary action regarding this issue, and updating the guidance could serve this purpose.

Unlike FEC in its enforcement of disclaimer requirements, FCC's enforcement process for sponsorship identification cases generally does not inform the broadcasters or cablecasters named in the complaint when investigations have been closed. In cases where a letter of inquiry has been sent, the broadcaster or cablecaster must fulfill its responsibility and provide FCC with the requested information. Yet, according to FCC, because it has no legal obligation to inform broadcasters that an investigation has concluded, it typically does not provide that information. For cases in which FCC conducts a full investigation and determines the broadcaster's actions not to be a violation of requirements, providing this information could give the broadcaster guidance on allowable activities. Even in cases where FCC closed a case with no investigation, informing the broadcaster that the case is closed, even if it may be reopened in the future, would support government-wide goals of greater transparency and sound oversight practices.

Finally, while in 2011 FCC had specific goals related to the timeliness of completing sponsorship identification investigations, it was unable to provide data supporting how it met those goals, and in subsequent years it withdrew the goals. In an effort to achieve greater openness, the timeliness of reporting and publishing information has been identified as an essential component.
By re-establishing goals about completing sponsorship identification investigations in a timely manner, FCC would support broader government goals of completing actions in a timely manner to better serve its constituencies and regulated entities.

We recommend that the Chairman of the FCC take the following three actions:

To provide clarity on how sponsorship identification requirements apply to activities not directly addressed by FCC's current guidance, such as the use of video news releases, and to update its guidance to reflect current technologies and recent FCC decisions about video news releases, FCC should initiate a process to update its sponsorship identification guidance and consider providing additional examples relevant to more modern practices.

To improve its transparency concerning which investigations are ongoing or have been concluded and to provide guidance on allowable activities, FCC should communicate the closure of all sponsorship identification investigations to the broadcaster named in the complaint after a letter of inquiry has been sent. The letter should indicate the case has been closed, but in doing so, FCC could note that closing the case does not signify an endorsement of the actions that were being investigated and that the case could be reopened.

To improve timeliness of investigations and ensure, when possible, that investigations are completed in an expeditious manner, FCC should develop goals for completing sponsorship identification cases within a specific time frame and develop a means to measure and report on how well it meets those goals.

We provided a draft of our report to FCC and FEC for review and comment. FCC provided comments in a letter dated January 23, 2013, that is reprinted in appendix II. Overall, FCC indicated that it will consider our recommendations and how to address the concerns discussed in our report. In response to our second recommendation—to communicate the closure of investigations with the broadcaster named in the complaint when a letter of inquiry has been sent—FCC identified a number of issues, many of which were cited in our report. Specifically, FCC has concerns that reporting the closing of a case may be misinterpreted as an endorsement of a broadcaster's actions. FCC further noted that its limited number of Enforcement Bureau staff available to work on the large portfolio of cases could not dedicate the necessary time to craft closing letters tailored to each case. However, we feel that FCC could create a standard letter—stating that a case has been closed, that the closing of the case does not endorse the actions of the broadcaster named in the complaint, and that the case could be reopened because of new evidence. We believe such a standard letter would require minimal resources to create and send, yet would contribute to greater transparency. FCC also noted that it is reluctant to single out sponsorship identification matters for special treatment in terms of closure letters but is also concerned about the even greater impact on resources if closure letters are instituted on a broad basis. However, we believe that this could serve as a pilot program for greater adoption of closure letters for other types of FCC investigations. In response to FCC's concerns, we updated our recommendation to demonstrate how a closure letter could be worded to indicate the closure did not indicate an endorsement of the actions being investigated and that a case could be reopened.
Both FCC and FEC provided technical comments that were incorporated as appropriate. When providing its technical comments, FCC discussed the treatment of VNRs, indicating that although the 2005 Public Notice states VNRs generally must have a sponsorship announcement, recent cases involving VNR complaints have resulted in FCC treating VNRs similar to other programming subject to the sponsorship identification requirements. We reflected this change in the report and added a reference to FCC decisions about VNRs to our first recommendation.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Chairman of the Federal Communications Commission and the Chair of the Federal Election Commission. We will also make copies available to others on request. In addition, the report will be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

We were asked to review requirements for identifying the sponsors of broadcast commercial and political advertisements and to determine the extent to which agencies responsible for administering them address complaints alleging violations of the requirements. Specifically, we (1) describe the sponsorship identification requirements and guidance for commercial and political content, the federal election disclaimer requirements, as well as stakeholders' views of these requirements and guidance and (2) assess how and to what extent FCC and FEC address sponsorship complaints through each agency's enforcement process.

To describe the sponsorship identification requirements and guidance as well as stakeholders' views on these requirements and guidance, we reviewed sponsorship identification statutes and regulations, including the Communications Act of 1934, as amended, and the Federal Election Campaign Act. We also reviewed relevant academic literature and interviewed FCC and FEC officials to gather their understanding of the governing statutes and regulations, how they have changed, their purposes, and how they apply to various media platforms, including conventional and embedded advertising and video news releases, and political and controversial advertisements. We conducted interviews with over one dozen stakeholders selected because they are either subject to sponsorship identification requirements, have monitored broadcasters' compliance with sponsorship identification requirements, or contributed research to the topic. These stakeholder interviews were conducted with representatives from the four major television broadcast networks as well as two additional groups that own television and radio stations. We also interviewed representatives from a consumer watchdog organization, academics, and trade associations that represent broadcasters and news directors and producers. These interviews were conducted to obtain views on the effect of statutes and regulations and possible changes to them.

To determine how and to what extent FCC and FEC address sponsorship identification complaints, we interviewed FCC and FEC officials responsible for receiving, processing, and investigating complaints.
In addition, we analyzed relevant FCC and FEC documents describing agency methods and processes for identifying violations, receiving sponsorship identification complaints, communicating with the complainant and subject of the complaint, initiating and conducting investigations, and taking enforcement actions. We also analyzed relevant FCC and FEC data to describe sponsorship identification complaints, investigations, and enforcement actions. We analyzed FCC data showing all complaints received by FCC from 2000 through June 2012 to determine the percentage of complaints that were sponsorship identification complaints, FCC actions in response to sponsorship identification complaints, and the time frames for resolving these complaints. We determined the FCC data to be sufficiently reliable for our purposes based on our previous analysis of the FCC database and on current interviews. To determine the extent to which FCC addresses sponsorship identification complaints, we analyzed all FCC enforcement actions pursuant to these complaints from 2000 through June 2012. We also analyzed FEC data showing all complaints received by FEC from 2000 through May 15, 2012, to determine the percentage of complaints that were disclaimer statement complaints, FEC actions in response to disclaimer statement complaints, and the time frames for resolving these complaints. Because the FEC data were not materially relevant to our findings, we asked a series of questions about internal controls and data reliability but did not make a final determination of their reliability. To determine the extent to which FEC addresses disclaimer statement complaints, we analyzed all FEC disclaimer statement cases, including cases dismissed by FEC, and all FEC disclaimer statement enforcement actions from 2000 through May 15, 2012. We also analyzed FCC and FEC strategic plans describing the agencies' respective goals for sponsorship identification and disclaimer statement complaint processing and enforcement. In addition, we analyzed FCC and FEC data and documents describing whether the agencies met their respective goals. We interviewed FCC and FEC officials about their goals and their progress toward achieving them. We conducted this performance audit from March 2012 through January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Ray Sendejas, Assistant Director; Eric Hudson; Bert Japikse; Michael Mgebroff; Amy Rosewarne; and Andrew Stavisky made key contributions to this report.
The FCC is responsible for ensuring that the public knows when and by whom it is being persuaded. Requirements direct broadcasters to disclose when a group or individual has paid to broadcast commercial or political programming. Political advertising must also comply with requirements overseen by the FEC. Recognition of sponsored programming has become increasingly difficult because of new technologies and increased access to sponsored programming such as video news releases. GAO (1) describes requirements for sponsorship identification and federal election disclaimers and stakeholders' views of the requirements and (2) assesses how and to what extent FCC and FEC address complaints. To conduct the work, GAO reviewed relevant laws, guidance, and enforcement procedures, and interviewed agency officials and stakeholders about enforcement processes and actions. Sponsorship identification statutes and regulations, overseen by the Federal Communications Commission (FCC), require broadcasters to identify commercial content--usually an advertisement, an embedded advertisement, or a video news release--that has been broadcast in exchange for payment or other consideration. A written or verbal sponsorship announcement must be made at least once during any sponsored commercial content except when the sponsor is obvious. For content considered political or that discusses a controversial issue, broadcasters must follow all requirements for commercial content and additional requirements, such as identifying officials associated with the entity paying for an advertisement. In addition, the Federal Election Commission (FEC) enforces federal election law that requires all political communications for a federal election, including television and radio advertisements, to include a disclaimer statement. FEC also oversees requirements to report campaign funding and expenditures, including funding for political advertising. FCC has guidance that helps broadcasters determine when a sponsorship announcement is needed, such as when a deejay receives a payment for airing specific content. While broadcasters consider this guidance useful, it addresses older technology that in some cases is no longer used. Furthermore, some broadcasters indicated that it would be helpful for FCC to clarify how the guidance applies in some situations, such as when a video news release or product is used during programming. According to FCC, it opened 369 sponsorship identification cases, representing just over 1 percent of the Investigations and Hearings Division's total cases opened from the beginning of 2000 through 2011. In 22 of these cases, FCC issued enforcement actions that varied in the types of violations cited and the actions taken. While FCC follows standard procedures when addressing complaints, it does not inform the broadcaster named in the complaint of the outcome of the investigation in many cases. Most broadcasters we spoke with confirmed that FCC does not inform them of the status of investigations, and some indicated they currently do not know the status of several investigations. According to FCC, it does not communicate status with broadcasters named in complaints because, among other reasons, it has no legal obligation to do so. Broadcasters reported that the lack of information about cases and FCC decisions creates uncertainty about the propriety of their past actions. As a result, broadcasters might not have sufficient information to determine whether they should modify their practices.
This can result in stations' editing content because of unwritten regulatory policy or what they assume the policy to be. Moreover, these investigations can be lengthy, taking from 10 months to over 5 years to complete when an enforcement action is involved. From 2000 through May 2012, FEC opened 301 cases based on complaints alleging violations of political advertisement disclaimer requirements. FEC assessed civil penalties in 29 cases, 7 of which were related to television or radio advertisement disclaimers. Unlike FCC, FEC provides status updates to those involved in investigations and issues reports explaining investigation findings. FEC also issues reports explaining case dismissals. These reports can clarify acceptable and unacceptable practices for the regulated community. FCC should, among other things, update its sponsorship identification guidance and consider providing additional examples relevant to more modern issues; communicate the resolution of an investigation to the target of the investigation when a letter of inquiry has been sent; and develop goals for resolving all sponsorship identification cases within a specified time frame. GAO provided FCC and FEC with a draft of this report. FCC indicated it will consider the recommended actions and how to address the concerns discussed in the report. Both FCC and FEC provided technical comments.
USPS has a universal service obligation, part of which requires it to provide access to retail services. It is required to serve the public and provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining. USPS is intended to be a financially self-sufficient entity that covers its expenses almost entirely through postal revenues. USPS receives virtually no annual appropriations but instead generates revenue through selling postage and other postal products and services. Retail alternatives are increasingly important to USPS; revenues from all retail alternatives—including self-service kiosks in post offices, USPS's website, and CPUs, among others—increased by about $1.6 billion from fiscal years 2007 to 2011, while post office revenues decreased by $3 billion. (See fig. 1.) During this same period, the share of USPS's total retail revenue generated by retail alternatives increased from about 24 percent to 35 percent. USPS projects that by 2020, retail alternatives will account for 60 percent of its retail revenues. Given this growing importance and USPS's planned retail-network restructuring, we recommended in November 2011 that USPS implement a strategy to guide efforts to modernize its retail network that addresses both post offices and retail alternatives. According to USPS officials, USPS is currently in the process of finalizing its retail strategy. The retail alternatives most similar to post offices are CPUs. They are privately owned, operated, and staffed and are usually colocated with a primary business, such as a convenience store or supermarket. They provide most of the same products and services as post offices (see fig. 2) at the same prices. CPUs typically have a counter with prominently displayed official USPS signage, provided by USPS, giving the CPU the look of a post office. (See fig. 3.) According to USPS, CPUs offer potential service and financial benefits, and, as we have previously reported, some foreign posts have successfully used private partnerships similar to CPUs to realize such benefits. CPUs can enhance service by being located closer to customers' homes and workplaces and operating at hours when post offices may not be open. They can alleviate long lines at existing post offices and provide postal services to areas with rapid population growth or where opening new post offices may be cost prohibitive. Regarding financial benefits, USPS has reported that the costs it incurs for CPUs are less than those it incurs for post offices, relative to revenue earned. USPS estimated that in fiscal year 2011, it incurred $0.17 in costs for each dollar of revenue at CPUs and $0.51 in costs for each dollar of revenue at post offices. Costs are lower, in part, because CPU operators, and not USPS, are responsible for their operating costs, such as rent, utilities, and wages for their employees. CPUs provide all their revenues from postal products and services to USPS, and USPS compensates CPUs for providing postal services under the terms of their contracts. The amount of compensation USPS pays to a CPU operator depends in large part on the type of contract the CPU operates under. Currently there are two basic types of contracts: fixed-price, under which USPS compensates the CPU a contractually determined amount regardless of sales, and performance-based, under which USPS compensates the CPU a contractually determined percentage of sales.
USPS's compensation to CPUs—either the amount under a fixed-price contract or the percentage under a performance-based contract—is specific to each CPU contract and the result of negotiation between USPS and the CPU operator. CPU hours of service are also negotiated for each contract, although USPS guidance on CPUs, in line with USPS's goal for CPUs to provide increased access and convenience, states that their days and hours of service should exceed those at post offices. Other terms and conditions are standardized in all contracts. For example, all CPUs are required to offer the same basic set of products and services, such as stamps, Priority Mail, Express Mail, and Certified Mail. In addition, all CPUs are contractually prohibited from selling services, including private mailboxes and others, that are competitive with USPS's products, and all CPU contracts specify USPS's rights to inspect the CPU at any time during operating hours. CPU contracts are valid for an indefinite period, but they specify that the CPU operator or USPS can terminate a contract and close the CPU at any time with 120 days' notice. USPS management and oversight of CPUs, including identifying and justifying the need for new CPUs, is done at the district and local levels. Staff at the district and local levels oversee day-to-day operations of CPUs and identify the need for new CPUs. When a district identifies the need for a new CPU, it approaches local businesses in the targeted area as potential partners and engages in a competitive application process. USPS has other partnerships with private entities to provide retail postal services, similar to CPUs. USPS launched a retail partnership called the Village Post Office in July 2011 in which existing small businesses provide a limited range of postal products and services in small communities where underutilized yet costly post offices may close, be consolidated with another nearby post office, or have their hours of service reduced. USPS is also developing partnerships with national and regional retailers to provide postal services. These partnerships differ from CPUs in that they will not be subject to the same prohibitions as CPUs on selling competing services, and with them, USPS is attempting to expand access at a national or regional level as opposed to addressing specific local needs as CPUs do. Village Post Offices sell a more limited range of USPS products and services than CPUs do. Local USPS staff solicit a local business for a Village Post Office opportunity, and a USPS contracting officer agrees to enter into a contract if the officer believes that the terms and conditions of that Village Post Office present a best value to USPS. USPS plans to launch the national and regional partnerships in test markets in early 2013 and will evaluate their effectiveness before making decisions about whether to expand the program. Although the total number of CPUs has decreased in recent years, USPS continues to use CPUs to provide customers with access to postal services at additional locations and for more hours of service. CPUs are located in a variety of locations, both urban and rural, and range from very close to far from post offices, demonstrating how USPS uses CPUs to provide customers with alternatives located near crowded post offices—which are often found in urban areas—and to provide service where post offices are not conveniently located or may not be cost effective for USPS, often in rural areas.
In addition, CPUs allow USPS to provide customer access at times often beyond the hours of service at post offices. According to USPS data, the number of CPUs fell from 5,290 in 2002 to 3,619 in 2011. During the past 5 fiscal years, USPS has opened new CPUs, but a higher number of CPUs have closed. (See table 1.) According to USPS headquarters officials who manage the CPU program, economic conditions forced many businesses that operated CPUs to close, and declining mail volume and sales of postal products have been the primary factors behind the decrease in the number of CPUs. Although USPS does not track specific reasons for CPU closures in its contract postal unit technology (CPUT) database, retail managers in eight USPS districts that we met with cited specific local issues resulting in CPU closures, including the following: The CPU operator retired or otherwise stopped working. For example, an Indiana CPU operator closed his primary business and moved out of the area. The CPU operator moved the primary business to a new location and did not retain the CPU. For example, the operator of a CPU in Texas moved his primary business across the street, but the new space was too small to host a CPU. The CPU operator sold the primary business. For example, a California CPU operator sold his self-storage business, and the new operators were not interested in maintaining the CPU. The CPU operator chose to close for financial considerations. For example, a CPU operator in Virginia closed the CPU because he felt it did not help his primary business. USPS initiated the closure because the CPU failed to meet the terms of the contract or USPS determined that the CPU was not cost effective. For example, USPS determined that a Maryland CPU that operated out of a private residence no longer brought in enough revenue to justify USPS's compensation to the CPU, so USPS closed the CPU. Consistent with USPS's goal to use CPUs to absorb excess demand at post offices, our analysis of the distance between CPUs and post offices shows that more than 56 percent of CPUs are less than 2 miles from the nearest post office and 26 percent are less than 1 mile. (See table 2.) For example, USPS opened a CPU in Frederick, Maryland, to better meet demand and reduce customer wait times in lines at the local post office about one-half mile away. Conversely, about 14 percent of CPUs are located 5 miles or more from the nearest post office, showing how CPUs can be used to provide services where post offices are not conveniently located, such as a CPU in rural Vigo Park, Texas, that is located 16 miles from the nearest post office. Similarly, USPS opened a CPU in Aubrey, Texas, located about 5 miles from the nearest post office, in order to serve customers in a fast growing area. Consistent with the majority of CPUs' being within 2 miles of a post office, CPUs are also more likely to be in urban than rural areas, and recent CPU openings further demonstrate this pattern. As shown in figure 4, more than 60 percent of CPUs active as of March 30, 2012, were in urban areas, as defined by the Rural-Urban Commuting Area codes we used for this analysis. This pattern is consistent with USPS's use of CPUs to reduce the time customers have to wait in line at a post office, a need that arises more often in urban areas. Furthermore, more than three-fourths of new CPUs in fiscal year 2011 were in urban locations. This suggests that CPUs may be most viable in urban areas with higher populations and customer traffic.
Our analysis shows that CPUs are rarer in suburban, large-town rural, and small-town rural locations. In recent years, USPS has intentionally shifted its means of compensating CPUs from fixed-price contracts—in which compensation to CPUs is a fixed amount regardless of sales—to performance-based contracts—under which compensation to CPUs is a percentage of the CPU's postal sales—resulting in potentially greater revenue and less financial exposure to USPS. (See fig. 7.) According to USPS officials, since 2002, USPS has entered into performance-based contracts for most new CPUs and has converted many fixed-price contracts to performance-based. The purpose of the shift is to incentivize CPU operators to market postal products and services to increase postal revenues. CPUs with fixed-price contracts have limited incentive to sell more postal products, since their compensation is the same regardless of their sales. Furthermore, since USPS compensates CPUs with performance-based contracts a percentage of the CPU's sales, USPS does not compensate these CPUs more than it receives in revenues, a situation that can happen with CPUs with fixed-price contracts. The total revenues USPS received from sales of postal products and services at CPUs declined about 9 percent, from $672 million in fiscal year 2007 to $611 million in fiscal year 2011, as shown in figure 8. However, as mentioned earlier, USPS's revenues from post offices declined about 22 percent during this period. The decline in CPU revenues is in part because of the decrease in the number of CPUs, as average CPU revenues decreased only 2 percent during this time. The downward trend in mail volume was also a factor, according to USPS officials. Several CPUs we visited experienced declining sales in recent years. For example, a CPU in Cedar Lake, Indiana, saw CPU revenues decline 17 percent from fiscal year 2007 to 2011. Several USPS district retail managers cited CPUs that closed because of low sales. For example, a CPU in Texas closed because neither the CPU nor the primary business generated sufficient revenue for the operator to stay in business. Our analysis of USPS data found that CPUs with lower than average revenues were more likely to close than were those with higher revenues. On average, CPUs that closed from fiscal years 2008 to 2011 generated roughly 26 percent less revenue in the year prior to closure than the average CPU revenue for that year. Individual CPU revenues vary widely, as shown in figure 9. USPS's revenue from individual CPUs averaged about $160,000 in fiscal year 2011, but a substantial number (41 percent) generated less than $50,000. Moreover, low-revenue CPUs are more likely to be located in rural areas where population is sparse and demand for services is lower; 22 percent of small-town rural and large-town rural CPUs had revenues under $5,000 in fiscal year 2011. High-revenue CPUs—such as the 7 percent that earned $500,000 or more in fiscal year 2011—are mostly located in urban areas where demand is likely higher and post offices are more likely to have long wait times. For instance, we visited one CPU in downtown Los Angeles with $1.8 million in revenues in fiscal year 2011. The ability to generate high revenues at this CPU led it to increase capacity by adding postal windows to keep pace with demand. USPS compensation to CPUs increased about 6 percent, from $75.4 million in fiscal year 2007 to $79.9 million in fiscal year 2011.
However, USPS compensation to CPUs decreased every fiscal year from 2008 to 2011. (See fig. 10.) According to USPS officials, the increase in compensation from fiscal year 2007 to 2008 was because of larger numbers of performance-based contracts; fewer public service contracts, which are generally less expensive; individual CPUs' petitions for increased compensation because of the increased cost of doing business; and economic conditions. The subsequent decline in USPS compensation to CPUs from fiscal years 2008 to 2011 was because of declining numbers of CPUs during that time. As with CPU revenues, USPS compensation to individual CPUs varies widely. (See fig. 11.) For example, 326 CPUs received no more than $100 in annual compensation in fiscal year 2011. On the other hand, that same year, 55 high-revenue CPUs with performance-based contracts received over $100,000 in compensation. In fiscal year 2011, USPS compensated CPUs an average of about $21,000, but compensated more than a quarter of CPUs less than $5,000. As USPS undertakes actions to achieve a sustainable cost structure, it will be important to understand the implications of CPUs for USPS's costs and revenues. Currently, USPS retains most of the revenues generated by CPUs, its major expense being compensation payments to CPU operators. As we described previously, in fiscal year 2011, USPS earned a total of $610.5 million in revenues from CPUs and, in return, compensated CPUs a total of $79.9 million, allowing USPS to retain $530.6 million in CPU revenues. Measured another way, after compensating CPUs, USPS retained $0.87 of every dollar of CPU revenues. However, for individual CPUs, the amount of revenues USPS retains after compensating the CPU varies significantly. USPS's target for individual CPUs is to retain, after compensation, $0.80 for every dollar in revenues. USPS did not meet this target for many of the roughly half of CPUs that have fixed-price contracts. (See fig. 12.) Moreover, for 23 percent of CPUs with fixed-price contracts in fiscal year 2011, USPS did not retain any revenues, as it compensated the CPU an amount greater than the revenue USPS received from the CPU. Most of these CPUs were in rural areas. Forty-nine percent of small-town rural CPUs with fixed-price contracts generated less revenue for USPS than the compensation USPS provided in fiscal year 2011. According to USPS officials, while USPS does not retain any revenue from these CPUs after compensating them, operating a post office in the same locations would be more onerous from a cost perspective. Because USPS compensates the roughly 45 percent of CPUs with performance-based contracts with a percentage of their sales—usually between 9 and 12 percent—USPS's revenues from CPUs with performance-based contracts will, by definition, always be greater than the amount of USPS compensation to them. USPS officials said that they review CPUs in which USPS retains less than $0.80 per dollar of revenue and attempt to decrease CPU compensation or terminate the CPU if necessary. USPS is embarking on a substantial makeover of its retail network, including reducing hours of service at thousands of underutilized post offices and expanding the use of retail alternatives through partnerships with national and regional retailers. According to USPS officials, at this time there are no plans to strategically increase the number of CPUs to help enhance service in the changing postal retail landscape.
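The compensation and retention arithmetic described above can be illustrated with a brief sketch. This is a minimal example, not USPS's actual accounting: the dollar figures, the 10 percent performance-based rate (chosen from the 9 to 12 percent range noted above), and the function names are hypothetical, while the two contract types and the $0.80 retention target come from this report.

```python
def compensation_due(cpu_revenue, contract_type, fixed_amount=None, rate=None):
    """Annual compensation under the two basic CPU contract types."""
    if contract_type == "fixed-price":
        return fixed_amount          # contractually set amount, regardless of sales
    if contract_type == "performance-based":
        return rate * cpu_revenue    # negotiated percentage of the CPU's postal sales
    raise ValueError("unknown contract type")

def retention_share(cpu_revenue, compensation):
    """Share of each CPU revenue dollar USPS keeps after compensating the operator.
    A negative value means compensation exceeded the revenue USPS received."""
    return (cpu_revenue - compensation) / cpu_revenue

RETENTION_TARGET = 0.80  # USPS target: retain $0.80 per dollar of CPU revenue

examples = [
    # (label, annual revenue, compensation) -- hypothetical figures
    ("performance-based CPU", 160_000, compensation_due(160_000, "performance-based", rate=0.10)),
    ("low-revenue fixed-price CPU", 4_000, compensation_due(4_000, "fixed-price", fixed_amount=6_000)),
]
for label, revenue, comp in examples:
    share = retention_share(revenue, comp)
    verdict = "meets" if share >= RETENTION_TARGET else "falls short of"
    print(f"{label}: retention share {share:.2f} ({verdict} the $0.80 target)")
```

Under a performance-based contract the retained share is simply one minus the negotiated rate, which is why those CPUs cannot cost USPS more than they bring in; a fixed-price CPU with very low sales can, as the second example shows.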
USPS officials said that they plan to continue to use CPUs to meet specific local needs identified by local and district officials. At the same time, pending legislation in the Senate would require USPS to consider opening CPUs as replacements for post offices that it closes. Although USPS has pared down its plans to close post offices by instead reducing their hours, to the extent that USPS closes post offices in the future, this requirement may put more pressure on USPS to open more CPUs. Furthermore, some district retail managers we spoke with said that they see a potentially larger role for CPUs in the future as USPS transforms its traditional retail network. However, we identified a number of challenges USPS might face in its future use of CPUs: Limited Potential Business Partners. USPS may face limited private interest in opening CPUs in certain areas. USPS planned to open thousands of Village Post Offices, which, similar to CPUs, involve partnerships with private businesses, by the end of 2012. However, as of August 20, 2012, USPS had opened only 41 Village Post Offices, in part because of a lack of interested private parties. USPS officials said that this lack of interested parties is because in some rural areas, there may not be any businesses to host a Village Post Office, and in other rural areas, businesses may not want to partner with USPS in what some communities may perceive as a reduction in services they receive. In addition, some district retail managers told us there are a number of reasons that some interested businesses do not become CPUs, including financial instability and not wanting to meet the conditions of new CPU contracts, such as space requirements or prohibitions on selling competitors' products and services. As a result, district staff are not always able to open as many new CPUs as they would like. Limited Staff Resources in USPS Districts. As we have previously mentioned, local and district-level USPS officials identify and justify the need for new CPUs, determining when and where to approach businesses as potential CPU partners. Some USPS district retail managers we spoke with told us that although there are unmet needs for CPUs in their districts, compared to prior years, they now have fewer staff and less time to seek out opportunities for new CPUs. Given the resources required to seek opportunities and open new CPUs, USPS may be unable to meet all local needs for CPUs with existing resources. Risk of Service Disruptions from CPU Closures. Because CPUs can close at any time—unlike post offices, which must undergo a lengthy review process, including a public comment period, prior to closure—there is a risk in relying on CPUs to provide service, especially in underserved areas where there may be a limited number of potential CPU partners and other post office alternatives. As discussed earlier, CPU operators can decide to close their CPUs for a variety of reasons. Although CPU contracts require CPUs to provide 120 days' notice to USPS before closing, some district retail managers we spoke with said that CPU operators often provide much less notice, sometimes as little as one week. Given the other challenges in opening new CPUs, USPS may have trouble replacing the lost service from unexpectedly closed CPUs. CPUs can play an important role in helping USPS provide universal service as it cuts costs to improve its financial condition—at times two conflicting goals.
CPUs can help USPS reach customers in convenient locations during convenient hours at a potentially lower cost than through post offices. USPS data show that an increasing proportion of retail revenue is generated through channels other than post offices, which indicates a growing level of customer acceptance of these non-traditional means of accessing postal services. While USPS plans to continue to use CPUs as one alternative to post offices to fill local needs for postal services, it is exploring national and regional partnerships to more broadly expand access to convenient retail alternatives nationwide. As USPS develops these regional and national partnerships, reduces hours of service at many post offices, and continues to use CPUs to fill specific local needs, it is important for USPS to consider CPUs' continuing role in USPS's evolving national retail network. We recommended in November 2011 that USPS develop and implement a retail network strategy that would address USPS customer access to both post offices and retail alternatives. USPS officials told us that as of July 2012, the agency is in the process of finalizing this retail strategy. We continue to believe, as we stated in November 2011, that it is important that such a strategy discuss how USPS plans to increase its use of retail alternatives—including CPUs—while considering significant changes to its network of post offices and the means through which it provides access to USPS's customers. As USPS continues to develop this retail strategy, we believe that USPS can capitalize on growing acceptance of retail alternatives by using information about CPUs to inform its decisions. For example, by considering factors such as the distance of CPUs to existing post offices, CPU hours and days of service, and USPS's costs of compensating CPUs, USPS could better inform its retail strategy in order to make better strategic use of CPUs in its future retail network, which will likely include reduced hours at thousands of post offices. We provided a draft of this report to USPS for review and comment. USPS provided a written response (see appendix III) in which it discussed USPS's efforts beyond CPUs to provide customers with sufficient and convenient access to its products and services through other types of partnerships and alternatives to post offices. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine how contract postal units (CPUs) supplement the U.S. Postal Service's (USPS's) network of post offices, we analyzed data from USPS's contract postal unit technology (CPUT) database. This database contains information for individual CPUs, including location, contract number, revenues, compensation, CPU contract type (fixed-price or performance-based), contract termination dates, and whether the CPU is in active service. Location data for each CPU in CPUT include a physical address with city, state, and ZIP+4 code. USPS provided us these data on March 30, 2012.
In determining the number of active CPUs, we encountered some duplicate CPU records. To avoid double counting, we used the CPU contract number to keep the record for only the oldest contract associated with each CPU. Based on the physical address, including ZIP+4 code, we determined the location type for each CPU by using the Department of Agriculture's Economic Research Service's Rural-Urban Commuting Area (RUCA) codes. RUCA codes classify a given location based on patterns of urbanization, population density, and daily commuting patterns. We classified CPU locations as one of four types: urban, suburban, large-town rural, and small-town rural. We also determined how many CPUs were located in each state using the state data in CPUT. Based on data from CPUT, we identified which CPUs closed from fiscal years 2007 to 2011. We identified which CPUs opened during this time period based on contract start dates from USPS's Contract Authoring Management System (CAMS). Contract start dates generally do not match the date that a CPU opens because, according to USPS officials, it usually takes 4 to 6 months for a CPU to open after USPS initiates a solicitation. We determined this date to be a reasonable approximation of when a CPU opens. We determined the number of CPUs that opened or closed in each fiscal year by counting the number of contract start dates and closure dates for each year. In addition, we obtained data from USPS's facilities database (FDB) on all post office locations, including physical street address, city, state, and ZIP+4. USPS provided us these data on December 19, 2011. We determined the distance between the active CPUs as of March 30, 2012, and post offices as of December 19, 2011, by using the latitude and longitude for each CPU and each post office and measuring the straight-line distance between the two points. We then determined which post office was closest to each CPU and by what distance. We also counted the number of post offices in each state and, using data on the number of CPUs in each state, determined the number of CPUs per 100 post offices in each state. We analyzed FDB data to determine the number of hours of service per day and per week for each CPU and post office, and how many locations are open at certain times, such as on Sundays. USPS provided these data for CPUs on June 7, 2012, and for post offices on June 27, 2012. We also visited 10 CPUs in the following regions: Chicago, Illinois; Dallas-Fort Worth, Texas; Southern California; and Washington, D.C. We selected those regions and the 10 CPUs to ensure diversity in geographic location, location type (urban, suburban, and rural), CPU revenue levels, and type of CPU contract (fixed-price and performance-based). We also selected locations close to GAO office locations in order to minimize the use of travel funds by GAO staff on this engagement. During our visits, we interviewed each CPU operator. We also interviewed district retail managers in each of the USPS districts responsible for managing these CPUs. During these interviews, as well as interviews with USPS headquarters staff in charge of managing the CPU program, we discussed the reasons for and benefits from using CPUs, the reasons why CPUs have closed, and factors that affect CPU revenues and compensation. We also reviewed GAO reports and USPS documents detailing the CPU program, including USPS guidance on CPUs and standard CPU contracts.
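The nearest-post-office distance measurement described above can be sketched in code. The report states only that straight-line distances were computed from each facility's latitude and longitude; the haversine great-circle formula below is one common way to do that and is an assumption, as are the sample coordinates and function names.

```python
import math

def straight_line_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two latitude/longitude points."""
    earth_radius_miles = 3958.8
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))

def nearest_post_office(cpu, post_offices):
    """Return (name, distance in miles) of the post office closest to a CPU.
    `cpu` and each entry in `post_offices` are (name, latitude, longitude) tuples."""
    _, cpu_lat, cpu_lon = cpu
    return min(
        ((name, straight_line_miles(cpu_lat, cpu_lon, lat, lon)) for name, lat, lon in post_offices),
        key=lambda pair: pair[1],
    )

# Hypothetical coordinates for illustration only
cpu = ("Sample CPU", 38.90, -77.03)
post_offices = [("Post Office A", 38.92, -77.05), ("Post Office B", 38.80, -77.10)]
name, miles = nearest_post_office(cpu, post_offices)
print(f"nearest post office: {name}, about {miles:.1f} miles away")
```

Binning the resulting distances (less than 1 mile, less than 2 miles, 5 miles or more, and so on) produces the kind of breakdown summarized in table 2.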
To determine USPS revenue from CPUs and USPS's compensation to them from fiscal years 2007 to 2011, we analyzed data from CPUT. We encountered numerous duplicate records for a single CPU address. To avoid double counting, we merged all financial data for a given contract number into a single record by summing the data for each unique contract number. Because CPUT stores CPU revenue and compensation data on a monthly basis, we summed monthly data to determine the total revenues and USPS compensation for each CPU for each fiscal year. We determined the amount of revenues USPS retains after compensating CPUs in each fiscal year by subtracting CPU compensation from CPU revenues and dividing that by CPU revenues. Finally, we linked data from CAMS on contract start dates to the financial data from CPUT by using the contract number for each CPU. As a result, we were able to determine the revenues and USPS compensation for CPUs that opened in each fiscal year. We did the same for closed CPUs by using CPU closure dates included in CPUT. We assessed the reliability of each of the data sources we used by interviewing responsible USPS officials about procedures for entering and maintaining the data and verifying their accuracy. We manually reviewed all data provided by USPS for obvious outliers. After reviewing this information, we determined that the CPUT data were sufficiently reliable for evaluating revenue and compensation trends, closure dates, and CPU locations. We did find that CPUT reported outlying revenue data in certain months for four CPUs in fiscal year 2007. To address these outlying data, we averaged the revenues for each of the four CPUs in the other months, where reported revenues seemed normal, and assumed that the CPU earned the average level of revenue in the outlying months. We determined that the CAMS data were sufficiently reliable for evaluating CPU start dates. We determined that the FDB post office and CPU hours-of-service data were sufficiently reliable for overall comparison purposes. As previously stated, the FDB included hours-of-service data for 3,320 CPUs as of June 27, 2012, 6.3 percent fewer than the 3,542 CPUs indicated by our analysis of CPUT as of March 30, 2012. In discussing the discrepancy with USPS officials, we determined that there was no indication that the CPU records missing from the FDB differed from the general population and were therefore unlikely to affect the outcome of our analysis. To determine challenges USPS might face if it increases its use of CPUs, we reviewed relevant legislation, USPS documents related to managing CPUs, prior GAO reports, and USPS Office of Inspector General reports. We also interviewed USPS officials responsible for implementing the CPU program, CPU operators, and USPS district retail managers at the sites and districts discussed earlier regarding current CPU operations and challenges the CPU program might face going forward. Table 5 provides the number of contract postal units (CPUs) and post offices in each state, as well as the number of CPUs per 100 post offices in each state as a measure of how reliant each state is on CPUs for providing access to postal services. Lorelei St. James, (202) 512-2834 or stjamesl@gao.gov. In addition to the individual named above, Heather Halliwell, Assistant Director; Patrick Dudley; John Mingus; Jaclyn Nelson; Joshua Ormond; Matthew Rosenberg; Amy Rosewarne; Kelly Rubin; and Crystal Wesco made key contributions to this report.
USPS's declining revenues have become insufficient to cover its costs. Its strategies to address losses include reducing hours of service at many post offices and expanding the use of post office alternatives, including CPUs. CPUs are independent businesses compensated by USPS to sell most of the same products and services as post offices at the same price. Although CPUs can provide important benefits, the number of CPUs has fallen from 5,290 in fiscal year 2002 to 3,619 in fiscal year 2011. As requested, this report discusses: (1) how CPUs supplement USPS's post office network, (2) USPS's revenue from CPUs and compensation to them from fiscal years 2007 to 2011, and (3) challenges USPS might face if it increases its use of CPUs. GAO analyzed USPS data on CPU locations, revenues, compensation, and hours of operation as well as on post office locations and hours of operation. GAO interviewed CPU owners and USPS staff in charge of managing CPUs. Although contract postal units (CPUs) have declined in number, their nationwide presence in urban and rural areas supplements the U.S. Postal Service's (USPS) network of post offices by providing additional locations and hours of service. More than 60 percent of CPUs are in urban areas where they can provide customers nearby alternatives for postal services when they face long lines at local post offices. Over one-half of CPUs are located less than 2 miles from the nearest post office. Urban CPUs are, on average, closer to post offices than rural CPUs. CPUs are also sometimes located in remote or fast-growing areas where post offices are not conveniently located or may not be cost effective. CPUs further supplement post offices by providing expanded hours of service. On average, CPUs are open 54 hours per week, compared to 41 hours for post offices. In addition, a greater proportion of CPUs than post offices are open after 6 p.m. and on Sundays. These factors are important as USPS considers expanding the use of post office alternatives to cut costs and maintain access to its products and services. Total USPS revenues from CPUs fell from fiscal years 2007 to 2011, while USPS's compensation to them increased during this period; nonetheless, CPUs generated high revenues relative to USPS's compensation to them. Declines in mail volumes and the number of CPUs drove revenues down 9 percent, from $672 million to $611 million, from fiscal years 2007 to 2011. USPS total compensation to CPUs increased 6 percent during this period, from $76 million to $80 million; however, after increasing from fiscal year 2007 to 2008, compensation decreased every fiscal year from 2008 to 2011. According to USPS officials, the overall increase was because of increased compensation to individual CPUs and decreasing numbers of less expensive CPUs. In fiscal year 2011, after compensating CPUs, USPS retained 87 cents of every dollar of CPU revenue. USPS has a target to retain 80 cents of every dollar in revenue for individual CPUs. USPS did not meet this target at many individual CPUs--especially ones in rural areas. In fact, 49 percent of small-town rural CPUs that USPS compensates a fixed amount regardless of their sales--areas where CPUs may serve as the de facto post office--generated less postal revenue than they received in compensation from USPS. CPU revenues and compensation are important factors as USPS seeks a more sustainable cost structure.
Limited interest from potential partners, competing demands on USPS staff resources, and changes to USPS's retail network may pose challenges to USPS's use of CPUs. USPS has no current plans to strategically increase the number of CPUs as part of its retail network transformation. However, a number of district USPS staff charged with identifying the need for CPUs told us they see a larger role for CPUs. Nevertheless, USPS may face limited interest from potential partners, as many may not want to operate CPUs because of concerns over CPU contract requirements, such as space requirements and prohibitions on selling products and services that compete with USPS. Many of the USPS district retail managers in charge of opening CPUs whom we spoke with said that finding partners to operate CPUs could be difficult. Furthermore, many of these managers said that they now have fewer staff and less time and, as a result, do not have the resources to manage opening CPUs to meet the need they have identified. GAO previously recommended that USPS develop and implement a plan to modernize its retail network. GAO is not making any new recommendations at this time, but believes that it is important for USPS to consider the role of CPUs as USPS works to develop and implement its retail network plan and control costs. In commenting on a draft of this report, USPS provided information on its efforts to provide convenient access to its products and services.
The voucher program is not an entitlement program. As a result, the amount of budget authority HUD requests and Congress provides through the annual appropriations process limits the number of households that the program can assist. Historically, appropriations for the voucher program (or for other federal housing programs) have not been sufficient to assist all households that HUD has identified as having worst-case housing needs—that is, unassisted households with very low incomes that pay more than 50 percent of their incomes in rent, live in substandard housing, or both. In 2009, 41 percent of the more than 17 million very low-income renters had worst-case housing needs, according to HUD. The primary problem affecting these renters was rent burden—approximately 97 percent paid more than 50 percent of their incomes in rent. To be eligible for assistance, in general, households must have very low incomes—not exceeding 50 percent of the area median income, as determined by HUD. Under the Quality Housing and Work Responsibility Act of 1998 (P.L. 105-276), at least 75 percent of new voucher program participants must have extremely low incomes—not exceeding 30 percent of the area median income. Under the voucher program, an assisted household pays 30 percent of its monthly adjusted income in rent; the remainder of the rent is paid through a HUD-subsidized "voucher," which generally is equal to the difference between (1) the lesser of the unit's gross rent (generally, rent plus utilities) or a local "payment standard" and (2) the household's payment. The payment standard is based on the HUD-determined fair market rent for the locality, which generally equals the 40th percentile of market rents (including utilities) recent movers paid for standard-quality units. HUD annually estimates fair market rents for metropolitan and nonmetropolitan areas. Housing agencies—the state and local agencies that administer the voucher program on HUD's behalf—can set payment standards (that is, pay subsidies) between 90 percent and 110 percent of the fair market rent for their areas. By determining fair market rents and setting payment standards at a rate sufficient to provide acceptable choices for voucher program participants, HUD and housing agencies essentially set the upper and lower bounds on the cost of typical, standard-quality units that voucher holders rent. Participants in the voucher program can choose to live in units with gross rents that are higher than the payment standard, but they must pay the full difference between the unit's gross rent and the payment standard, plus 30 percent of their income. In 2011, more than 2,400 housing agencies administered more than 2.2 million vouchers—their programs ranged in size from more than 96,000 vouchers to fewer than 5. Housing agencies are responsible for inspecting units, ensuring that rents are reasonable, determining households' eligibility, calculating and periodically re-determining households' incomes and rental payments, and making subsidy payments to landlords. In addition, housing agencies perform basic program functions, such as establishing and maintaining a waiting list, processing tenant moves, conducting landlord and tenant outreach, and reporting to HUD. HUD disburses appropriated funds to housing agencies for subsidy payments to landlords and administrative expenses. Each year, Congress appropriates funding for subsidies for renewal (existing) and incremental (new) vouchers and administrative expenses.
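To make the subsidy arithmetic described above concrete, the sketch below applies the general rule: the tenant pays 30 percent of monthly adjusted income, the voucher generally covers the gap up to the lesser of gross rent or the payment standard, and payment standards may be set at 90 to 110 percent of the fair market rent. The function names and the $900 fair market rent are hypothetical, the income and rent figures are chosen to resemble the medians for assisted households reported later, and the calculation is a simplification of the general rule rather than HUD's full computation.

```python
def payment_standard(fair_market_rent, factor=1.0):
    """Housing agencies may set the payment standard at 90 to 110 percent of the
    HUD-determined fair market rent for the area."""
    if not 0.90 <= factor <= 1.10:
        raise ValueError("payment standard factor must be between 0.90 and 1.10")
    return factor * fair_market_rent

def monthly_voucher_subsidy(adjusted_monthly_income, gross_rent, standard):
    """Return (subsidy, tenant payment) for one month under the general rule:
    the tenant pays 30 percent of adjusted monthly income, the voucher covers the
    difference up to the lesser of gross rent or the payment standard, and the
    tenant also pays any amount by which gross rent exceeds the standard."""
    tenant_share = 0.30 * adjusted_monthly_income
    subsidy = max(min(gross_rent, standard) - tenant_share, 0)
    tenant_payment = tenant_share + max(gross_rent - standard, 0)
    return subsidy, tenant_payment

# Illustrative figures: $10,700 annual adjusted income, $880 gross rent, $900 fair market rent
standard = payment_standard(900, factor=1.0)
subsidy, tenant = monthly_voucher_subsidy(10_700 / 12, 880, standard)
print(f"monthly subsidy about ${subsidy:,.0f}; tenant pays about ${tenant:,.0f}")
```

The same function shows why falling incomes or rising rents raise program costs: lowering the income input or raising the rent input, with everything else held constant, increases the subsidy.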
As part of the appropriations process, Congress outlines a formula that determines the amount of renewal funding for which housing agencies are eligible (“eligible amount”). However, the amount Congress appropriates to the voucher program may not equal the total amount for which housing agencies are eligible under the formula. HUD is responsible for allocating program funding (“appropriated amount”) among housing agencies based on their eligible amounts. To the extent that the appropriated amount does not fully fund housing agencies at their eligible amounts, HUD reduces the funding housing agencies receive to fit within the appropriated amount. Housing agencies are expected to use all the subsidy funding HUD allocates for authorized program expenses (including subsidy and utility payments). However, if housing agencies’ allocated amounts exceed the total cost of their program expenses in a given year, they must maintain their unused subsidy funds in NRA (reserve) accounts. Housing agencies may use their NRA balances (subsidy reserves) to pay for authorized program activities in subsequent years. Incremental vouchers include various special-purpose vouchers. Congress appropriates funding for these vouchers in separate line items in the budget, which distinguish them from renewal vouchers. Housing agencies must apply to HUD to receive allocations of and funding for the special-purpose vouchers, which, as described in table 1, include Enhanced, Tenant Protection, Family Unification Program, Mainstream, Nonelderly Disabled, and Veteran Affairs Supportive Housing vouchers. These vouchers may have different or additional eligibility and operational requirements than renewal vouchers. After the first year, special-purpose vouchers generally become renewal vouchers for purposes of determining funding eligibility in the next year, but HUD may require that housing agencies separately track them as special-purpose vouchers. Congress appropriates funding for administrative fees, and the formula used to calculate the administrative fee generally is based on fair market rents, adjusted annually to reflect changes in wage rates. HUD pays fees to housing agencies based on the number of units leased (vouchers used) as of the first of each month. HUD pays one (higher) rate for the first 600 units under lease and a second (lower) rate for the remaining units. As with subsidy funding, if the appropriated amount does not fully cover housing agencies’ fees as determined by the formula, HUD will reduce the amount of funding each housing agency receives to fit within the appropriated amount. Since fiscal year 2006, administrative fees have accounted for less than 10 percent of total voucher program funding. Some housing agencies that administer vouchers can participate in and receive funding under MTW, a demonstration program authorized by Congress in 1996 and implemented by HUD in 1999. MTW allows participating housing agencies to test locally designed housing and self- sufficiency initiatives in the voucher and other federal housing programs. Housing agencies may waive certain statutes and HUD regulations to achieve three objectives: (1) reduce cost and achieve cost-effectiveness in federal expenditures; (2) give incentives to families with children whose heads of household are working, seeking work, or in job training, educational or other programs that assist in obtaining employment and becoming economically self-sufficient; and (3) increase housing choices for low-income families. 
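The two-tier administrative fee structure described above also lends itself to a short sketch. The 600-unit threshold, the higher and lower rate structure, the per-unit-leased monthly basis, and the possibility of proration when appropriations fall short are taken from this report; the specific dollar rates, the proration factor, and the function name are hypothetical.

```python
def monthly_admin_fee(units_leased, rate_first_600, rate_remaining, proration=1.0):
    """Administrative fee for one month, based on the number of vouchers under
    lease on the first of the month: one (higher) rate applies to the first 600
    units and a second (lower) rate to the rest. `proration` models the reduction
    HUD applies when appropriations do not cover the full formula amount."""
    first_tier_units = min(units_leased, 600)
    remaining_units = max(units_leased - 600, 0)
    formula_amount = first_tier_units * rate_first_600 + remaining_units * rate_remaining
    return formula_amount * proration

# Hypothetical rates and proration for an agency with 2,200 vouchers under lease
fee = monthly_admin_fee(2_200, rate_first_600=75.00, rate_remaining=70.00, proration=0.95)
print(f"monthly administrative fee about ${fee:,.0f}")
```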
MTW agencies also have "funding flexibility"—they may use their program-related funding (including voucher subsidy funding) and administrative fees for any purpose (programmatic or administrative). Currently, 35 housing agencies participate in MTW—according to HUD, they administer about 15 percent of all vouchers and account for approximately 16 percent of all subsidy and administrative fee funding on an annual basis. Congress and HUD fund MTW agencies pursuant to their MTW agreements; however, the agencies could have subsidies and administrative fees reduced if the amounts Congress appropriated were less than the housing agencies' eligible amounts under the formulas. Several factors affected voucher program costs (as measured through congressional appropriations, HUD outlays, and housing agencies' expenditures) and in some cases contributed to cost increases from 2003 through 2010, including (1) increases in subsidy costs for existing vouchers, (2) subsidy costs for new vouchers, and (3) administrative fees paid to housing agencies. In addition to these factors, the design and goals of the voucher program, such as requirements to target assistance to certain households, contributed to overall program costs. Despite increases in the cost of the program from 2003 through 2010, our work and other published studies have found that vouchers generally have been more cost-effective in providing housing assistance than federal housing production programs designed to add to or rehabilitate the low-income housing stock. In addition, Congress and HUD have taken several steps to manage cost increases over the period. Several factors affected increases in congressional appropriations, HUD outlays, and housing agencies' expenditures in the voucher program from 2003 through 2010. As shown in table 2, from fiscal years 2005 through 2011, voucher program appropriations increased from approximately $14.8 billion to $18.4 billion (approximately 4 percent annually). Over the same period, outlays—funding HUD disburses to housing agencies for program expenses—increased from $10 billion to $18.6 billion (approximately 11 percent annually). Information on appropriations and outlays for the voucher program was not available for 2003 and 2004 because HUD did not report this information separately from other rental assistance programs. Once disbursed, housing agencies expend program funds on activities such as making subsidy payments to landlords and covering administrative expenses. As shown in figure 1, from 2003 through 2010, housing agencies' expenditures increased from approximately $11.7 billion to $15.1 billion (about 4 percent annually). Expenditure data for 2011 were not available at the time we were conducting our review. HUD's outlays and housing agencies' expenditures can differ somewhat in any given year because of differences in the timing of payments and fluctuations in the rate of funding utilization—that is, some housing agencies may not use all of their apportioned funds and may build reserves. Later in this report we discuss the extent to which housing agencies have accumulated subsidy reserves and steps Congress and HUD could take to reduce future budget requests or reallocate the reserve funds. As shown in table 3, housing agencies' expenditures increased by a total of about 28.9 percent in nominal dollars from 2003 through 2010. Once adjusted for inflation, housing agencies' expenditures increased at a smaller rate, approximately 8.8 percent.
(We evaluated expenditures after adjusting for the general effects of inflation using a broad-based index of price changes for all goods and services. We expressed expenditures in 2011 constant dollars, the latest year for which complete data on price changes are available.) In the sections below, we discuss how (1) increases in subsidy costs for existing vouchers, (2) subsidy costs for new vouchers, and (3) administrative fees each contributed to the nominal and constant dollar increases in voucher program costs from 2003 through 2010. As shown in table 3 above, in nominal terms, subsidy costs for existing vouchers grew by 19.5 percent, accounting for a majority of the increase in housing agencies' expenditures from 2003 through 2010. After adjusting for inflation, subsidy costs for existing vouchers grew by a small amount (2.4 percent) and were a smaller contributor to the total increase in expenditures. Two factors generally explain the remaining increase in expenditures for existing vouchers after adjusting for inflation—changes in market rents and household incomes. As previously discussed, the subsidies HUD and housing agencies pay to landlords on behalf of assisted households are based on market rents and household incomes. As a result, changes in market rents and household incomes affect subsidy cost. As shown in figure 2, in 2011 constant dollars, median gross rents for voucher-assisted households increased from about $850 to $880 (or 4 percent) over the period. Growth in rents outpaced the rate of general inflation. As rents increase, HUD and housing agencies must pay larger subsidies to cover the increases, assuming no changes to household incomes or contributions to rent. Housing agencies we contacted reported that this increase in rental prices can be explained, in part, by increased demand and competition for affordable housing—for example, some noted that the number of renters has increased as a result of an increase in the number of foreclosures in recent years. National vacancy rates—an indicator of the relative tightness of the rental market—decreased from 2009 to 2010. Further, as figure 3 shows, in 2011 constant dollars, the median annual income of voucher-assisted households contracted from about $11,000 to $10,700 (a decrease of about 3 percent) from 2003 through 2010. Over the period, incomes of assisted households did not keep pace with the rate of general inflation. As incomes decline, voucher-assisted households pay less toward rent, requiring larger subsidies to cover the difference between rents and tenant payments. More than half of the housing agencies we contacted reported that job loss and wage reductions contributed to increases in their subsidy costs over the period of our analysis. One housing agency in California also reported that state cuts to social welfare programs, including those that provide direct cash assistance, lowered incomes for some households and therefore increased subsidy costs. HUD estimated that reductions in federal welfare and disability income payments have resulted in monthly subsidy payment increases of $17 and $5, respectively, for households that receive those forms of assistance. The increase in the number of households assisted with vouchers (that is, subsidy costs for new vouchers) from 2003 through 2010 was another important contributor to the program's rising costs. As table 3 shows, in nominal dollars, subsidy costs for new vouchers grew by 5.3 percent over the period.
After adjusting for inflation, subsidy costs for new vouchers grew by 4.4 percent, accounting for half of the overall increase in housing agencies' expenditures over the period. Congress increased the size of the program through the addition of special-purpose vouchers such as Enhanced, Tenant Protection, Family Unification Program, Mainstream, Nonelderly Disabled, and Veterans Affairs Supportive Housing (see table 1 for a description of each of these types of vouchers). HUD was unable to provide the data necessary to determine the extent to which each type of voucher contributed to the growth in program expenditures during this period. Finally, the fees paid to housing agencies to administer the voucher program grew by about 4.1 percent in nominal dollars from 2003 through 2010 (see table 3). In constant dollar terms, administrative fees grew by 2 percent over the period. The formula HUD uses to pay administrative fees to housing agencies is not directly tied to the cost of performing the administrative tasks the program requires. Moreover, the fees HUD has paid housing agencies in recent years have been less than the amount for which they were eligible under the formula because of reductions in appropriations. Housing agencies we contacted noted that the cost of doing business increased over the period of our analysis. For example, several noted that inspection costs have increased with the growing cost of gasoline, especially for housing agencies that administer vouchers over large geographic areas. Others noted that policies such as portability—the ability of voucher holders to use their vouchers outside of the jurisdiction of the housing agency that issued the voucher—increased staff costs because these policies are increasingly complex and difficult to implement and monitor. Representatives of housing agencies with whom we spoke said that they have frozen salaries and hiring and increased staff hours, among other things, to cope with reductions in administrative fees. The design and goals of the voucher program contribute to its overall expense, although it is difficult to quantify how much of the cost increase from 2003 through 2010 was due to design issues. Specifically, the voucher program has various features that are intended to target or give priority to the poorest households, and serving these households requires greater subsidies. Long-standing federal policy generally has required household contributions to rent to be based on a fixed percentage of household income, which can be reduced through income exclusions and deductions for certain expenses, such as child care and health services. Further, housing agencies are required to lease 75 percent of their new vouchers to extremely low-income households. In addition, housing agencies may establish local preferences for selecting applicants from their waiting lists. Like the income standards and targeting requirements, these preferences have a direct impact on subsidy costs—for example, the Boston Housing Authority has established preferences designed to assist "hard-to-house" individuals and families, including those experiencing homelessness. According to housing agency officials, because these individuals and families have little to no income, the agency's annual per-unit subsidy costs are higher than they would be absent the preferences.
While these types of requirements help address Congress's and HUD's goal of serving the neediest households, HUD officials noted that such requirements make the program more expensive than it would otherwise be if housing agencies had more flexibility to serve households with a range of incomes. Similarly, program goals, such as HUD's deconcentration policy, also can affect program costs. Specifically, this policy encourages assisted households to rent units in low-poverty neighborhoods, which typically are more expensive. According to HUD officials, the deconcentration goal increases subsidy costs for housing agencies and overall costs for the department because, as previously discussed, if rents increase but household contributions to rent remain constant, HUD and housing agencies must make up for the increased rent burden in the form of higher subsidy payments. Despite increases in the cost of the voucher program from 2003 through 2010, our work and other published studies consistently have found that vouchers generally have been more cost-effective in providing housing assistance than federal housing production programs designed to add to or rehabilitate the low-income housing stock. Our 2002 report provides the most recent original estimates of the cost differences between the voucher program and certain existing production programs. We estimated that, for units with the same number of bedrooms in the same general location, the production programs cost more than housing vouchers. In metropolitan areas, the average total 30-year costs of the production programs ranged from 8 to 19 percent greater for one-bedroom units. For two-bedroom units, the average total 30-year costs ranged from 6 to 14 percent greater. The cost advantage of vouchers over the production programs was likely understated because other subsidies—such as property tax abatements—and potential underfunding of reserves to cover expected capital improvements over the 30-year cost period were not reflected in the cost estimates for the production programs. Much of the research over the past several decades reached similar conclusions. For example, in 2000, HUD found that average ongoing costs per occupied unit of public housing were 8 to 19 percent higher than voucher subsidy costs. In 1982, the President's Commission on Housing found that subsidy costs for new construction were almost twice as much as subsidy costs for existing housing. The commission's finding set the stage for the eventual shift from production programs to vouchers as the primary means through which the federal government provides rental housing assistance. Notwithstanding the cost-effectiveness of vouchers relative to other forms of rental housing assistance, many of these studies noted the benefits that production programs can and have conferred on low-income households and communities, such as supportive services for the elderly and persons with disabilities. The voucher program typically does not confer such benefits. In addition, research has indicated that some markets may have structural issues, such as regulatory restrictions that reduce the supply of housing (and thus opportunities for households to use vouchers); in those locations, production programs may be more effective tools for providing affordable housing than vouchers.
In addition, our work found that voucher holders sometimes are unsuccessful in using their vouchers, either because they cannot find units that meet their needs or because landlords are unwilling to accept their vouchers. These households may benefit more from production programs, which can better guarantee access to affordable housing, than vouchers. In light of increasing program costs, Congress and HUD have taken several steps to limit the extent of increases from fiscal years 2003 through 2011, while maintaining assistance for existing program participants. These steps include legislative changes to the formula HUD uses to calculate and distribute subsidy funding to housing agencies, as well as continued efforts to reduce improper rental assistance payments. Before fiscal year 2003, Congress and HUD funded housing agencies' renewal needs based on their average per-voucher costs from the previous year, adjusted for inflation, and multiplied by the number of authorized vouchers. That is, housing agencies received funding for all of their authorized vouchers, regardless of whether they leased all of those vouchers. In addition, prior to 2003, HUD provided each housing agency with reserve funding equal to one month of its subsidy funding—housing agencies could use their reserves to fund new vouchers (a practice called "maximized leasing"). Beginning in fiscal year 2003, Congress changed the voucher program's funding formula so that it would provide housing agencies with renewal funding that was tied to housing agencies' actual costs and leasing rates rather than the number of authorized vouchers (whether used or unused). Starting in fiscal year 2003, Congress stopped providing funding for vouchers that housing agencies issued in excess of their authorized levels, thus prohibiting over- (or maximized) leasing. Congress generally based voucher program appropriations for fiscal year 2003 and thereafter on the number of leased vouchers (not to exceed authorized levels) and actual cost data that housing agencies reported to HUD. Congress discontinued the practice of providing reserve funding for housing agencies and instead started reserving a portion of renewal funding to make adjustments to housing agencies' allocations for contingencies such as increased leasing rates or certain unforeseen costs. In more recent years, Congress has provided HUD appropriations that did not fully fund housing agencies at their eligible amounts under the funding formula. In every year since 2004, Congress has provided administrative fees that were at least 6 percent lower than the 2003 rate. Finally, as shown in table 4, in fiscal years 2008 and 2009, Congress rescinded a portion of housing agencies' subsidy reserves and directed HUD, in total, to offset almost $1.5 billion from 1,605 housing agencies. HUD has taken steps to reduce improper payments in the voucher program. According to HUD reports, the department has reduced gross improper payments (subsidy over- and underpayments) resulting from program administrator errors (that is, a housing agency's failure to properly apply income exclusions and deductions and correctly determine income, rent, and subsidy levels) by almost 60 percent, from $1.1 billion in fiscal year 2000 to $440 million in fiscal year 2009.
In addition, HUD has provided housing agencies with fraud detection tools—such as the Enterprise Income Verification system, which makes income and wage data available to housing agencies—and realized continued reductions in improper payments as a result of these tools. According to HUD, from fiscal year 2006 through 2009, the department reduced gross improper payments resulting from errors in reported tenant income—including the tenant's failure to properly disclose all income sources—by approximately 37 percent, from $193 million to $121 million. These efforts do not necessarily reduce the cost of assisting households, but they help increase the program's efficiency by helping ensure that an appropriate level of assistance is provided and that appropriated funds potentially can serve more households. HUD has requested the authority to implement program reforms that have the potential to decrease voucher program subsidy costs, administrative costs, or both. For example, as shown in table 5, in its fiscal year 2012 budget request, HUD proposed implementing a rent demonstration to test alternatives to the current rent structure that could result in assisted households paying more in rent. As we discuss later in this report, changes to the way assisted household contributions to rent are calculated could result in cost savings to the program. Although Congress did not grant HUD the authority to implement these voucher-related initiatives, HUD recently initiated administrative changes to its housing agency consortium rule, a first step in the effort to encourage housing agencies to consolidate as envisioned by the department's 2011 Transforming Rental Assistance proposal. The revised rule would treat participating housing agencies in a consortium as one entity. HUD's current regulation requires that consortium members be treated separately for oversight, reporting, and other purposes—as a result, few housing agencies have formed consortiums since 1998. Finally, in 2010, HUD began reviewing the administrative fee structure for the voucher program. The study aims to ascertain how much it costs a housing agency to run an efficient voucher program. HUD plans to use the results to help develop a new formula for allocating administrative fees. Although not enough time has passed to determine whether HUD's findings will positively or negatively affect costs in the voucher program, this study represents a positive effort on HUD's part to more clearly understand administrative costs in the voucher program and identify ways to improve efficiency. According to HUD officials, HUD intends to use this study as a basis for future legislative proposals, which could have implications for the cost of administering the program. In addition, in 2009, HUD developed a tool designed to help HUD staff and housing agencies forecast voucher and budget utilization (that is, the percentage of budget allocation and percentage of authorized vouchers they are using) for up to 3 years. Department officials credit the tool with increasing voucher program efficiency; however, HUD and housing agencies' use of the forecasting tool has not reduced overall costs in the voucher program. We identified several options that, if implemented effectively, could reduce voucher program costs (by approximately $2 billion annually, based on our estimates) or allow housing agencies to assist additional households if Congress chose to reinvest the cost savings in the program.
First, improved information on the level of subsidy reserve funding housing agencies should maintain could aid budget decisions and reduce the need for new appropriations. Second, agency officials have noted that the voucher program's requirements are complex and burdensome and that streamlining these requirements could reduce costs. Finally, changes to the calculation of voucher-assisted households' payments toward rent—known as rent reform—and consolidating voucher administration under fewer housing agencies could also reduce program costs. Each of these options would require congressional action to implement, and we discuss below possible steps that HUD could take to facilitate the implementation of some of them. Rent reform and administrative consolidation also involve difficult policy decisions that will affect some of the most vulnerable members of the population and alter long-standing program priorities and practices. Housing agencies have accumulated subsidy reserves (unspent funds) that Congress could use to (1) reduce program appropriations (through a rescission and offset) and potentially meet other federal needs or (2) direct HUD to assist additional households. As previously discussed, HUD allocates subsidy funding to housing agencies based on the formula Congress establishes in annual appropriations legislation. In recent years, the formula has specified that HUD allocate funds based on housing agencies' leasing rates and subsidy costs from the prior year. In any given year, housing agencies may under-lease or receive more funding than they can spend. Unless these funds are rescinded and offset, housing agencies can keep their unused subsidy funding in reserve accounts and spend these funds on authorized program expenses (including rent subsidies and utility allowance payments) in future years. Over time, large sums of money can accumulate. As of September 30, 2011, 2,200 housing agencies had more than $1.5 billion in subsidy reserves, which included unspent subsidy funding from prior years as well as certain set-aside funding and funding for new vouchers for which insufficient time had passed for expenditure. In addition, beginning in 2012, HUD implemented changes to how it disburses subsidy funds to housing agencies. As a result of these changes, although housing agencies may continue to accumulate subsidy reserves, HUD, rather than the housing agencies, holds these reserves. This change also will allow HUD to better determine the extent of the reserves housing agencies have accumulated. HUD officials told us that the department believes that it requires specific authority from Congress to reduce (offset) future voucher program budget requests by all or a portion of housing agencies' subsidy reserves. Although HUD provides quarterly reports to the Congressional Budget Office on the extent of housing agencies' reserves and has requested the authority to offset and, in some cases, redistribute "excess" reserves (that is, reserves beyond what is needed to fund contingencies, such as cost increases from rising rental rates or falling tenant incomes, as defined by HUD), the department has not developed specific or consistent criteria defining what constitutes excess reserves or how it would redistribute funding among housing agencies. For example, in its fiscal year 2012 voucher program budget proposal, HUD requested the authority to offset excess reserves.
According to the proposal, if given this authority, the department first would reallocate the funds to housing agencies to make up any difference between the appropriated amount and the total funding for which housing agencies were eligible based on the renewal formula and then redistribute any remaining funds to housing agencies based on "need and performance." However, the proposal did not specify how HUD would calculate excess subsidy reserves or provide a detailed methodology for redistributing the funds, and HUD officials acknowledged that redistributing excess funds among housing agencies will increase the size and the cost of the program over time because, if housing agencies are able to lease more vouchers with these funds, Congress will have to appropriate more funding for renewal vouchers in subsequent years. Because housing agencies' reserves are resources that HUD has disbursed and expended, HUD effectively recaptures any excess reserves by reducing or offsetting the housing agencies' funding allocation in another year. HUD generally has excluded housing agencies with 250 or fewer vouchers from its proposed offsets. HUD officials told us that they have been considering lowering this threshold or developing a sliding scale methodology (generally based on size) to determine the amount of reserves housing agencies should maintain and the amount of excess reserves that HUD would propose offsetting and redistributing. In past work, we highlighted the importance of HUD's clearly identifying the existence and amount of unexpended subsidy funds (reserves) so that Congress could have confidence in the department's capacity to effectively manage the funding appropriated for the voucher program. We concluded that HUD should take steps to ensure that reserves did not reach unreasonable levels—that is, in excess of what is prudently needed to address contingencies. More recently, we stated that agency reporting about key areas such as financial management or program reforms should competently inform congressional decision making, and agencies should engage Congress about how to present this information. While a reserve for contingencies is prudent, without clear and consistent criteria for determining what constitutes an appropriate level for housing agency reserves, it is difficult to judge how well HUD has managed the funding Congress has provided for the voucher program. For example, as previously discussed, in fiscal years 2008 and 2009 Congress rescinded and directed HUD to offset excess subsidy reserves. However, as shown in table 6, the 2009 rescission and offset were too large for 288 (about 18 percent) of the 1,605 housing agencies that were subject to the 2008 and 2009 rescissions and offsets to absorb. Congress had to provide these 288 and an additional 152 housing agencies with supplemental funding to prevent the termination of voucher assistance. Similarly, in the fiscal year 2012 budget, Congress rescinded and directed HUD to offset housing agencies' subsidy reserves by $650 million. Based on our analysis, as of September 30, 2011, housing agencies had approximately $606 million in excess reserves, approximately $44 million short of the $650 million rescission amount.
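For illustration, an excess-reserve estimate of this kind can be computed agency by agency as the reserve balance held above a retention threshold; the sketch below uses the 8.5 percent (roughly one-month) threshold described in the next paragraph, and the balance and allocation shown are hypothetical.

```python
def excess_reserve(reserve_balance, annual_allocation, threshold=0.085):
    """Estimate 'excess' subsidy reserves as the amount held above roughly
    one month (8.5 percent) of an agency's annual funding allocation."""
    allowed = threshold * annual_allocation
    return max(reserve_balance - allowed, 0.0)

# Hypothetical agency: a $1.2 million reserve against a $10 million annual
# allocation leaves $850,000 as an allowed reserve and $350,000 as excess.
print(excess_reserve(1_200_000, 10_000_000))
```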
Our analysis assumed that housing agencies retained in reserves the equivalent of one month or about 8.5 percent of their annual funding allocations—HUD's current thinking on the appropriate level of reserves—and also excluded certain set-aside funding and funding for new vouchers. As a result, to meet the $650 million rescission goal, HUD would have to offset more funds from housing agencies' reserves than would be required under a one-month reserve criterion, potentially resulting in some housing agencies holding less than a one-month reserve for contingencies. HUD officials have noted that certain requirements for administering the voucher program have grown burdensome and costly and could be streamlined. In May 2010, the Secretary of HUD testified that the department's rental assistance programs "desperately need simplification." He further stated that HUD must streamline and simplify its programs so that they are easier for families to access, less costly to operate, and easier to administer at the local level. For example, under current rules, housing agencies must re-examine household income and composition at least annually and adopt policies describing when interim re-examinations will be conducted. HUD has expressed support for extending the time between re-examination of income for households on fixed incomes from 1 to 3 years and the time between unit inspections from 1 to 2 years—according to one program administrator that manages voucher programs for five housing agencies, annual re-examinations and inspections account for more than 50 percent of administrative costs in the voucher programs the agency administers. However, overall data are not available on the actual costs of specific administrative activities, such as annual income re-examinations and inspections, and how they vary across housing agencies. To help address this lack of information, HUD has initiated a study to determine (1) what constitutes an efficient voucher program, (2) what a housing agency realistically should be expected to do to run an efficient program, (3) how much it costs to run an efficient program, and (4) what an appropriate formula would be for allocating administrative fees to housing agencies operating voucher programs. According to HUD, the study will allow the department to analyze all aspects of voucher program administration to reduce and simplify housing agencies' administrative responsibilities. Such information will be important as congressional decision makers consider potential reforms of administrative requirements. Although some of the changes needed to simplify and streamline the voucher program would require Congress to amend existing statutes, HUD's administrative fee study and the experiences of housing agencies participating in MTW may provide insight into specific reforms to ease housing agencies' reported administrative burden, as well as any potential cost savings resulting from these reforms. For example, according to a HUD report, while conclusive effects of many MTW reforms, particularly as they relate to assisted households, are not known, some of the demonstration's most compelling results to date are those related to housing agency operations. As shown in table 7, many of the housing agencies that participate in the demonstration have implemented reforms that Congress has been considering through draft legislation, that HUD has proposed in its fiscal year 2012 budget request, or both.
According to the MTW agencies, many of these initiatives have resulted in both time and cost savings in their programs. In addition, and as previously discussed, the existing administrative fee formula generally is linked to fair market rents that are adjusted annually to reflect changes in wage rates, and HUD pays fees to housing agencies based on the number of units leased (vouchers used) as of the first of each month. This formula is not tied to the program's current administrative costs or requirements. Further, housing agencies we contacted reported that the cost of administering the voucher program has been on the rise, with contributing factors including higher postage, fuel, and employee health care costs, as well as increased reporting and other requirements. Without more specific information about potential reform options, policymakers will not be able to make an informed decision about how to reform the administrative fee formula and the activities required to administer an efficient voucher program. These efforts—using the administrative fee study to identify specific reforms and leveraging the experiences of MTW agencies—are in line with the goals of the Government Performance and Results Act of 1993 (GPRA), which Congress enacted, in part, to inform its decision making by helping to ensure that agencies provide objective information on the relative effectiveness and efficiency of their programs and spending. Whether HUD's study will yield findings that eventually will result in measurable cost or time savings is not clear. While reforming administrative requirements for the voucher program could lead to increased efficiencies and cost savings, the administrative fee paid to housing agencies is a relatively modest share of the program's overall annual appropriations—approximately 9 percent in recent years. Nevertheless, such efforts will provide Congress with timely and meaningful information, which will enhance its ability to make decisions about funding for and requirements of the voucher program. If implemented, rent reform (that is, changes to the calculation of households' payment toward rent) and the consolidation of voucher administration under fewer housing agencies could yield substantial cost savings, allow housing agencies to serve additional households if Congress were to reinvest annual cost savings in the voucher program, or both. Further, these reform options are not mutually exclusive; that is, cost savings or additional households served could be greater if both options were implemented. However, implementation of these options may involve some trade-offs, including increased rent burdens for assisted households. As previously discussed, under current program rules, an assisted household generally must contribute the greater of 30 percent of its monthly adjusted income or the housing-agency established minimum rent—up to $50—toward its monthly rent. HUD's subsidy is the difference between (1) the lesser of the unit's gross rent or the applicable payment standard and (2) the household's rental payment. Therefore, as an assisted household's income increases, HUD's subsidy payment decreases, and vice versa. Under existing program rules, a household could pay no rent—if the household has no monthly income after adjustments, the housing agency from which the household receives assistance does not have a minimum rent, or the household obtained a hardship exemption.
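To make the current rent structure concrete, the sketch below implements the calculation just described: the household pays the greater of 30 percent of adjusted income or the agency's minimum rent, and the subsidy covers the rest of the lesser of gross rent or the payment standard. The dollar amounts in the example are hypothetical.

```python
def monthly_subsidy(adjusted_income, gross_rent, payment_standard, minimum_rent=50):
    """Current-rule sketch: the tenant pays the greater of 30 percent of
    monthly adjusted income or the minimum rent; the subsidy is the lesser
    of gross rent or the payment standard minus that tenant payment,
    never below zero."""
    tenant_payment = max(0.30 * adjusted_income, minimum_rent)
    return max(min(gross_rent, payment_standard) - tenant_payment, 0)

# Hypothetical household: $900 monthly adjusted income, $950 gross rent,
# $1,000 payment standard -> tenant pays $270 and the subsidy is $680.
# With no income and no minimum rent, the tenant payment would be $0.
print(monthly_subsidy(900, 950, 1_000))
```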
However, such households make up a small share of all voucher-assisted households, with more than 99 percent making some dollar contribution to their rent. Because about 90 percent of voucher program funds are used to pay subsidies, decreasing the level of subsidy for which households are eligible (or, alternatively stated, increasing the amount households must contribute toward rent) necessarily will yield the greatest cost savings for the program. We estimated the effect, both in terms of cost savings and additional households that could be served with those savings if Congress chose to reinvest the savings in the program, of several options, including requiring assisted households to pay (1) higher minimum rents; (2) 35 percent of their adjusted income in rent; (3) 30 percent of their gross income in rent (with no adjustments); or (4) a percentage of the applicable fair market rent. Using HUD data, we determined that each of these options could reduce the federal cost burden—in some cases, quite considerably—or, if Congress chose to reinvest cost savings in the program, allow housing agencies to serve more households without additional funding. For example, as shown in table 8, increasing minimum rents to $300 would yield the greatest cost savings on an annual basis—an estimated $1.8 billion—or allow housing agencies to serve the greatest number of additional households—an estimated 287,000. Requiring assisted households to pay 30 percent of their gross income in rent would yield the least savings for the voucher program and serve the fewest additional households. Further, HUD operates a number of other rental assistance programs where household subsidies are based on the same calculations as those for the voucher program. Implementation of these rent reform options in its other rental assistance programs has the potential to create additional cost savings. These reform options could be implemented individually, and some could be implemented together, depending on the objective policymakers were trying to achieve—such as maximizing cost savings, minimizing the impact on assisted households, or promoting work and self-sufficiency among families with children (that is, nonelderly, nondisabled households). To illustrate, one housing agency in the MTW program put in place a rent structure that gradually increases household rents—from 27 percent of gross income in years 1 and 2, to the greater of $100 or 29 percent of gross income in years 3 and 4, and to the greater of $200 or 31 percent of gross income in all subsequent years—to promote self-sufficiency among all assisted households. Under this approach, our analysis showed that households receive more subsidy in the first 2 years, but pay more rent over time than under the current rent structure. In addition to estimating the cost savings that could result from each of these rent reform options, we evaluated each option in terms of its effect on (1) changes in the rent paid by assisted households, (2) household attrition rates, (3) HUD's goals of encouraging households to move to the neighborhoods of their choice (mobility) and discouraging households from choosing communities that have higher levels of poverty (deconcentration), (4) incentives to seek work, (5) program administration, and (6) housing agency and industry support. While each of these options has advantages over the current rent structure—they could reduce costs or create administrative efficiencies—each also involves trade-offs.
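For reference, the alternative rent structures evaluated above reduce to simple payment rules. The sketch below lays them out side by side; the option labels, parameter values, and example household are ours and are illustrative only.

```python
def tenant_payment(option, adjusted_income, gross_income, fair_market_rent):
    """Monthly tenant payment under the rent reform options discussed above."""
    if option == "minimum_rent_300":      # higher minimum rent (here, $300)
        return max(0.30 * adjusted_income, 300)
    if option == "adjusted_income_35":    # 35 percent of adjusted income
        return 0.35 * adjusted_income
    if option == "gross_income_30":       # 30 percent of gross income, no deductions
        return 0.30 * gross_income
    if option == "share_of_fmr_35":       # a percentage (here, 35) of the fair market rent
        return 0.35 * fair_market_rent
    raise ValueError(f"unknown option: {option}")

# Hypothetical household with $800 adjusted and $1,000 gross monthly income
# in a $900 fair-market-rent area: payments of $300, $280, $300, and $315.
for opt in ("minimum_rent_300", "adjusted_income_35", "gross_income_30", "share_of_fmr_35"):
    print(opt, tenant_payment(opt, 800, 1_000, 900))
```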
Under each rent reform option, some households would have to pay more in rent than they currently pay. For example, as shown in table 9, if all households were required to pay at least $50 in rent per month, an estimated 36,000 households (2 percent) would experience an average increase of $31 in their monthly rent. HUD's fiscal year 2013 budget request proposes increasing the minimum rent to $75 per month for all assisted households. Under this option, 207,000 households (11 percent) would experience an average increase of $27. Table 9 also shows options that change the formula for calculating the households' payment toward rent. For example, setting the household's rental payment to 30 percent of gross income (that is, without any deductions) would affect about 1,662,000 households (86 percent) and increase mean household rent by $27. Increasing minimum rents primarily would affect families with children, which tend to report little or no income. Conversely, assisted elderly and disabled households almost always report income (most likely because they are on fixed incomes, like Social Security), and a large percentage of them already pay close to $200 in rent. On a programwide level, imposing minimum rents of $200 or less does not change the amount these households pay in rent, when considering all assisted households. Figure 4 shows the mean change in all households' monthly rent resulting from each of these rent reform options. Monthly rental payments for elderly and disabled households begin to increase more significantly with a $200 minimum rent and under each of the rent formula changes. As a result, higher minimum rents or increases in the percentage of income paid in rent would yield the greatest cost savings. For the rent formula change to 35 percent of adjusted income, the mean change in monthly rent generally would be similar across each household type. Figure 5 shows the mean change in monthly rent only for those households whose payments toward rent would change as a result of each reform option. Among these affected households, changes in rental payments would be similar across household types for some of the rent structure options. For example, if households were required to pay a $75 minimum rent, mean rental payments would increase by $30 for disabled households (on the high end) and $24 for elderly, disabled households (on the low end). However, if households were required to pay a $200 or higher minimum rent, families with children again would experience higher mean changes in rent than disabled and elderly households. Also as shown in figure 5, under the option where rental payments are based on 35 percent of the fair market rent, some households would have to pay more in monthly rent, while others would pay less. Further, a higher proportion of affected households would see an increase in their rental payments. Specifically, of the approximately 1.9 million total households whose monthly rental payments would change under this option, about 61 percent (approximately 1.2 million households) would experience an increase in their monthly payments and about 39 percent (755,000 households) would experience a decrease. Requiring households' rental payments to be based on a percentage of the applicable fair market rent rather than 30 percent of adjusted income primarily would affect households living in high-cost (mostly urban) areas and large families, as well as those at the lower end of the income scale.
HUD's fair market rents reflect market prices and unit sizes—thus, under a rent option based on a percentage of the fair market rent, households' rent shares will increase if they live in a more expensive fair market rent area or rent larger units in the same fair market rent area. Table 10 illustrates how fair market rents and household payments based on a percentage of the fair market rent can vary by location and unit size. In addition, under an option where households' rental payments are based on a percentage of the fair market rent, lower-income households would pay a larger percentage of their income toward rent than higher-income households. And while many of the lowest-income households would experience rent increases ($116 per month, on average, for families with children), many of the highest-income households would experience rent decreases ($97 per month). Under each of these rent reform options, a small number of households might lose their subsidies—that is, their subsidy payments would be reduced to zero because their new, higher rental payments would fully cover the gross rent. For example, under the option where households pay 35 percent of their adjusted income in rent, we estimated that approximately 1.8 percent of households would lose their subsidies. Further, other affected households might leave the program because they would have to pay more in rent and no longer choose to participate in the program. However, because the demand for rental assistance by low-income households generally exceeds the number of available vouchers, an eligible household likely would replace one that left, particularly because similar unassisted households have much higher rent burdens than assisted households. Consequently, these rent reform options likely would not result in a sharp decline in program participation rates. Rent structures that decrease the amount of subsidy households receive may discourage HUD's deconcentration efforts, as well as household mobility. With less subsidy, households (especially those with lower incomes) may not have the means to move from neighborhoods of concentrated poverty to those with a diversity of people and opportunities. But HUD's deconcentration goal presents its own trade-offs—chief among them that fewer households ultimately would be served, albeit with more generous subsidies. Among the rent reform structures we evaluated, all but one would decrease household subsidies. A rent structure under which households would pay 30 percent or less of the applicable fair market rent would increase subsidies for almost all households and thus could further HUD's deconcentration and mobility goals. Two of the rent structures we evaluated—higher minimum rents and rents based on a percentage of the fair market rent—could create work incentives for households with little to no income. Under the current rent structure, and as previously discussed, a household with no income generally does not pay rent—HUD's subsidy covers the gross rent. Consequently, some have argued that these households have little incentive to seek employment because, for every $1 they earn, their subsidies are reduced by 30 cents (for every $100 they earn on a monthly basis, they will pay $30 in rent). Rent structures that do not take into account household income may do more to encourage assisted households to find and retain employment. Housing agencies in the MTW program that have implemented these types of rent structures simultaneously have offered self-sufficiency training and services to assisted households.
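The graduated MTW schedule described earlier (27 percent of gross income in years 1 and 2, the greater of $100 or 29 percent in years 3 and 4, and the greater of $200 or 31 percent thereafter) illustrates how rising rent floors can be combined with income-based payments; the sketch below expresses it as a rule, with a hypothetical household income for the example.

```python
def mtw_tiered_rent(gross_income, years_in_program):
    """Graduated MTW rent schedule described above (a sketch)."""
    if years_in_program <= 2:
        return 0.27 * gross_income
    if years_in_program <= 4:
        return max(100, 0.29 * gross_income)
    return max(200, 0.31 * gross_income)

# A household with $1,000 in monthly gross income would pay $270 in year 1,
# $290 in year 3, and $310 in year 6 under this schedule.
print(mtw_tiered_rent(1_000, 1), mtw_tiered_rent(1_000, 3), mtw_tiered_rent(1_000, 6))
```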
Additionally, rent structures that eliminate household income from the rent equation may allow Congress and HUD to more accurately forecast funding needs. As we previously discussed, market rents and tenant incomes are two of the primary drivers of program costs, and predicting changes in market rents and incomes when developing budget proposals for future years is difficult. These types of rent structures also would encourage assisted households to make choices about housing consumption similar to those of unassisted households. For example, households would not have an incentive to over-consume housing because their share of the rent would increase with the size of the unit they rented. (See GAO, HUD Rental Assistance: Progress and Challenges in Measuring and Reducing Improper Rent Subsidies, GAO-05-224 (Washington, D.C.: Feb. 18, 2005), which discusses approaches for statutory, regulatory, and administrative streamlining and simplification of HUD's policies for determining subsidies.) Finally, nearly all of the housing agencies we contacted said that they supported some type of rent reform—among the most popular options were increasing minimum rents and increasing tenant rental payments to 35 percent of adjusted income. Some housing agencies have suggested that they have been successful in implementing rent reform under the MTW program with community support. Despite this, some industry groups have voiced concern about rent reform. For example, in commenting on a provision included in the draft Section 8 Savings Act of 2011 that would permit HUD to pursue a rent demonstration, the National Low Income Housing Coalition stated that the demonstration would put HUD-assisted households at risk of having significant rent burdens. The Coalition also said that any demonstration should include parameters that require HUD to monitor these burdens and stop or change the demonstration if it were found to harm assisted households. Based on our literature review and interviews with HUD and housing industry officials, consolidation of voucher program administration under fewer housing agencies (administrative consolidation) could yield a more efficient oversight and administrative structure for the voucher program and cost savings for HUD and housing agencies; however, current information on the magnitude of these savings was not available. HUD spends considerable resources in overseeing housing agencies. More than 2,400 local housing agencies administer the voucher program on HUD's behalf. According to a 2008 HUD study, the department dedicated from more than half to two-thirds of its oversight effort to housing agencies that administer about 10 percent of program units (generally those housing agencies that administer 400 or fewer vouchers) and an even smaller share of subsidy funds (about 5 percent of total program funds), and that therefore pose a lower level of risk. According to agency officials, consolidating the administration of vouchers under fewer agencies would decrease HUD's oversight responsibilities. According to officials from HUD and some housing agencies with whom we spoke, administering the voucher program through small local housing agencies may be less cost-effective, in part because of differences in economies of scale. For example, larger housing agencies can realize cost efficiencies in conducting large numbers of voucher unit inspections that smaller agencies cannot. Also, larger housing authorities collect sufficient fees to support fraud detection units to ensure that households report all of their income sources.
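The two-tier administrative fee described in the next paragraph (a higher per-unit rate for the first 600 vouchers under lease and a lower rate thereafter) is one way the current structure reflects these economies of scale. The sketch below shows the calculation with hypothetical per-unit rates, not HUD's published rates.

```python
def monthly_admin_fee(units_leased, first_tier_rate, second_tier_rate, tier_cutoff=600):
    """Two-tier per-unit fee: a higher rate for the first 600 units under
    lease and a lower rate for the remainder (rates here are hypothetical)."""
    first_tier_units = min(units_leased, tier_cutoff)
    second_tier_units = max(units_leased - tier_cutoff, 0)
    return first_tier_units * first_tier_rate + second_tier_units * second_tier_rate

# With hypothetical rates of $60 and $56 per unit, an agency with 1,000
# leased vouchers would receive $36,000 + $22,400 = $58,400 for the month.
print(monthly_admin_fee(1_000, 60, 56))
```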
Although there are no current data on the comparative costs of administering the voucher program through small and large housing agencies, the current administrative fee structure recognizes that economies of scale exist in larger housing agencies. As previously discussed, HUD pays housing agencies a higher rate for the first 600 vouchers a housing agency has under lease and a lower rate for the remaining units under lease. Congress adopted this two-tiered fee structure based in part on a 1994 HUD study that found that flat fee rates were leading to administrative fee deficits in small housing agencies and large administrative fee reserves at larger housing agencies. HUD has acknowledged that oversight and administrative efficiencies could be realized. As previously discussed, in recent years, the department has advanced several proposals aimed at streamlining and simplifying administration of the voucher program. Several of these proposals have advocated administrative consolidation as a means of creating administrative efficiencies. For example, HUD's 2011 version of the Transforming Rental Assistance initiative was intended to streamline and improve the delivery and oversight of rental assistance across all of the department's rental assistance programs by means such as promoting consortiums, consolidation, and other locally designed structures for administrative functions. In addition, HUD recently initiated changes to its housing agency consortium rule. The revised rule would treat all housing agencies in a consortium as one entity—HUD's current regulation requires that consortium members be treated separately for oversight, reporting, and other purposes. Some have argued that the current rule does not allow HUD or housing agencies to realize the full benefits of consolidation—less oversight (one versus multiple agencies) and shared and thus reduced administrative responsibilities—and therefore discourages the formation of consortiums. Since 1998, nine housing agencies that administer vouchers have formed four consortiums. We evaluated administrative consolidation in terms of its effect on assisted households and selected voucher program goals. More specifically, we looked at implications for, or the likelihood of achieving, (1) HUD's mobility and deconcentration goals, (2) program administration, and (3) housing agency and industry support. Like the rent reform options we evaluated using similar criteria, consolidation has advantages over the current administrative structure, but also involves some trade-offs. Consolidation might help HUD more readily achieve deconcentration goals. Although vouchers theoretically allow recipients to use them anywhere in the United States, the current system of program administration creates numerous hurdles for households seeking to move out of the high-poverty, central city jurisdictions in which they typically live. Most housing agencies originally were established to construct and manage public housing developments. As a result, program administration does not always align with housing markets. In urban areas within the same market, several housing agencies may operate voucher programs with different admissions criteria and subsidy levels.
A paper by researchers at the Brookings Institution argued that this "fragmentation of local program administration undermines the potential of the program as a mechanism for deconcentrating urban poverty." Extending the jurisdiction of housing agencies (through consolidation, for example) likely would give assisted households access to more housing options, particularly in surrounding suburbs. On the other hand, regionalized administration of the voucher program may make it harder for households to make or maintain contact with program administrators when necessary—for example, assisted households may not have access to transportation or may have to travel long distances to meet with housing agency officials. Several states offer examples of regional or statewide administration. Thirty-one states have programs in which one housing agency administers a voucher program throughout the state. These housing agencies administer from less than 1 percent to all of their respective state's total voucher allocation. In addition, as part of our work, we visited a number of housing agencies in the Boston, Massachusetts, metropolitan area. As a result of litigation in the mid-1990s, local housing agencies in the state are permitted to lease vouchers throughout the state (that is, outside their original jurisdictions, which typically align with city limits). Although all of the housing agencies with which we spoke suggested that it was important that housing agencies maintain local control of their programs, each leased at least one voucher outside its original jurisdiction. In Brookline—a city with relatively high housing costs compared with the surrounding area and the nation—more than half of voucher holders rent apartments outside the city limits. Although consolidation will not alleviate housing agencies' current administrative burden, it may begin to address some of the issues housing agencies and industry groups have raised about a particular policy—portability. Although portability is one of the hallmark objectives of the voucher program, almost all of the housing agencies we contacted said that HUD's portability policies should be revised or eliminated, noting that they are complicated and costly to administer. Under HUD's portability rules, an assisted household may move to the jurisdiction of a different housing agency—the receiving agency either may bill the sending agency for assistance for the transferring household or absorb the household into its own program. According to the 2000 Brookings Institution report, because of the complexity of the portability process—for example, receiving agencies may calculate subsidy levels differently than sending agencies or apply more rigorous screening criteria—many housing agencies do not fully explain portability to households and do not encourage them to consider moving. In addition, consolidated waiting lists and single points of contact for housing assistance within a single housing market, region, or state may make the process of applying for and obtaining rental assistance less confusing and more transparent for households seeking assistance. For example, a large number of housing agencies in Massachusetts participate in a consolidated waiting list—households seeking assistance in the state need only put their names on one list and receive communications from one agency. HUD officials said that the department has been considering taking steps to maintain the waiting lists of each housing agency in a centralized system.
Finally, housing agencies we contacted were split on the idea of consolidation—about one-quarter supported it as a way to cut costs and introduce administrative efficiencies in the voucher program, while almost half were against it. Some housing industry groups and an academic with whom we spoke argued that consolidation would not save money—one noted that the administrative fees that small housing agencies receive are relatively insignificant in terms of total program dollars—and would sacrifice local discretion and control of voucher programs. Others noted that administrative cost savings could result from the consolidation and single-source management of waiting lists and the elimination or substantial reformation of the portability process; however, no data currently are available to assess this point. Over the past decade, Congress has responded to the increasing cost of vouchers by changing the way the program is funded. Specifically, rather than providing funding based on the number of vouchers housing agencies are permitted to lease, Congress currently provides funding based on housing agencies' prior-year subsidy expenses. Congress also has capped appropriations so that housing agencies do not always receive the amount of subsidy or administrative funding for which they are eligible based on the funding formulas Congress annually establishes. While this approach gives Congress some control over cost increases, it does not directly address the market and policy factors we identified as contributing to increases in program costs. Although policymakers can do little to alter or control market factors such as changes in rents and tenant incomes, our analysis suggests that savings could continue to be realized (or, in some cases, more households could be served without additional program funding if Congress chooses to reinvest the funds in the program) if HUD provided Congress better information on housing agencies' subsidy reserves. Enhanced information would include the extent of housing agencies' subsidy reserves, clear and consistent criteria for determining how much housing agencies would need to retain to help ensure effective program management, and how much could be rescinded in future appropriations. Without such information, HUD faces difficulties in effectively managing the funding Congress provides for the voucher program, including ensuring that funds disbursed to housing agencies are used to assist households rather than remaining unused in reserve accounts. In tandem with providing information about the use of program funds, HUD also has an opportunity to advance proposals that would help increase the efficiency of program administration. In particular, HUD now has or will have richer, relevant experience and data from which to draw. In addition to previous reforms HUD has proposed, examples from the MTW program and HUD's study on administrative fees can offer options to Congress for streamlining and simplifying administrative activities and aligning the administrative fee structure with actual administrative expenses. For example, information and analyses from these sources could help identify all current administrative requirements, determine which of those activities are necessary and which could be eliminated or streamlined, and determine the cost of performing these activities—which could help reduce program costs in the future.
Although Congress and HUD have taken several steps to control rising costs in the voucher program, we have identified a range of options that offer the additional promise of managing program costs or increasing efficiency in the long term. These options also would be applicable to HUD's other rental assistance programs and would have the potential to generate even greater savings. Implementing rent reform and administrative consolidation would require policymakers to consider some potential trade-offs—in the balance are issues such as the rent burden of assisted households, concentration of poverty, and the extent of local control over voucher programs. Nevertheless, these options have certain advantages over the current program structure. For example, these options could save money or streamline program administration—both of which are important objectives in a time of fiscal constraint. Currently, Congress is considering a variety of measures to address some of these issues. To help reduce voucher program costs or better ensure the efficient use of voucher program funds, we recommend that the HUD Secretary provide information to Congress on (1) housing agencies' estimated amount of excess subsidy reserves and (2) its criteria for how it will redistribute excess reserves among housing agencies so that they can serve more households. In taking these steps, the Secretary should determine a level of subsidy reserves housing agencies should retain on an ongoing basis to effectively manage their voucher programs. Further, the Secretary should consider proposing to Congress options for streamlining and simplifying the administration of the voucher program and making corresponding changes to the administrative fee formula to reflect any new or revised administrative requirements. Such proposals should be informed by the results of HUD's ongoing administrative fee study and the experience of the MTW program. We provided a draft of this report to HUD for comment. In its written response, reproduced in appendix II, HUD neither agreed nor disagreed with our recommendations, but provided technical comments that we have incorporated where appropriate. While the response noted that the draft report provided an accurate assessment of the program and its current outcomes, HUD identified several points for clarification and emphasis, including the following. HUD commented that the stated purpose of our report of identifying options for increasing efficiencies and simplifying program administration was inconsistent with our recommendations for agency action because some of the options do not result in both efficiencies and simplification. We clarified, where appropriate, that the focus of our report was to identify reform options that could reduce costs or create efficiencies. HUD also commented that the draft report's discussion of growth in HUD's outlays could be misleading because this growth reflects only a change in HUD's disbursement policy and does not relate at all to changes in program costs. Specifically, HUD stated that starting in 2006, the program was required to disburse all eligible funds, rather than the department maintaining those reserves. HUD did not provide any support for its assertion that outlays reflect only a change in HUD's disbursement policy and do not relate at all to changes in program costs. While we recognize that disbursement policies may affect outlays, changes in program size and other factors would also affect outlays.
Further, although the draft provides information on the trends in actual HUD outlays, it focuses on housing agencies' expenditures because they are a better measure of what housing agencies are paying in subsidies to assisted households with vouchers. Therefore, we made no changes in response to this comment. HUD also commented that the draft report did not address HUD's ongoing efforts to limit the accumulation of subsidy reserves. We added language to the report on these efforts, such as the assistance HUD provides to housing agencies in ensuring that all available voucher funds are utilized. HUD noted that it currently provides quarterly reports to the Congressional Budget Office on subsidy reserve levels. However, these quarterly reports do not include information on the estimated amount of housing agencies' subsidy reserves that exceed prudent levels, as we are recommending. If HUD provides the estimated amount of excess subsidy reserves, Congress will be better positioned to make informed funding decisions, as we illustrated in our draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development and other interested committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of our review were to (1) determine the factors that have affected costs in the Housing Choice Voucher (voucher) program from 2003 through 2011 and the actions Congress and the Department of Housing and Urban Development (HUD) took to manage these costs and (2) identify additional steps HUD, housing agencies, or policymakers can take to limit cost growth in the voucher program and more effectively provide decent, safe, and affordable housing. To determine the factors that have affected costs in the voucher program from 2003 through 2011 and the actions Congress and HUD took to manage these costs, we reviewed and analyzed appropriations legislation and budget documents—including HUD budget proposals, Congressional Research Service reports, monthly statements from the Department of the Treasury, and the Office of Management and Budget SF-133 reports on budget execution and budget resources. We also reviewed HUD's annual guidance on the allocation of the program's appropriation to housing agencies. We used these sources to determine the annual appropriations and outlays over the period. The starting year for our analysis reflects the year when Congress began changing the voucher program's funding formula. We analyzed program data that HUD prepared using information derived from multiple HUD systems, including the Central Accounting and Program System (HUDCAPS) and Voucher Management System (VMS), to determine how much housing agencies' expenditures changed from 2003 through 2010. Specifically, we assessed the extent to which certain factors, such as subsidies paid to landlords, program size (that is, the number of assisted households), and administrative expenses, contributed to the change in program expenditures over this period.
We identified these factors by reviewing GAO, HUD, and stakeholder studies. We also reviewed prior work by GAO and others to describe what is known about the cost-effectiveness and characteristics of vouchers relative to other forms of rental housing assistance. To identify additional steps HUD, housing agencies, or policymakers can take to limit cost growth in the voucher program and more effectively provide decent, safe, and affordable housing, we identified and reviewed relevant legislation, draft legislation, and studies. We analyzed HUD's VMS data on the Net Restricted Assets (NRA) balances (or subsidy reserves) of housing agencies as of September 30, 2011, to determine the extent of housing agencies' "excess" subsidy reserves. To derive our estimates of the potential "excess" balances, we used HUD's 8.5 percent (about a month) threshold to estimate the excess NRA balance. Also, we analyzed HUD data to determine the number of housing agencies and amount of funding that Congress offset in fiscal years 2008 and 2009 and the additional funding Congress appropriated for and HUD provided to certain housing agencies in 2009. Further, we visited nine housing agencies in Massachusetts. We selected these housing agencies based on Massachusetts' use of both local and regional housing agencies to provide voucher assistance and the housing agencies' proximity to one another. In addition, we interviewed 31 of the 35 housing agencies participating in the Moving to Work (MTW) demonstration program to identify the activities the agencies had implemented in their voucher programs to reduce program costs and introduce efficiencies in the program. For example, as part of these interviews, we identified alternate rent structures these agencies had implemented or proposed. We also evaluated the cost and policy implications of three types of programmatic reforms to the voucher program: increasing minimum rents, changing the percentage of income tenants pay toward rent, and requiring tenants to pay a percentage of fair market rent. In identifying and assessing these programmatic reforms, we reviewed proposals included in draft legislation and HUD, Congressional Budget Office, and housing industry group reports. We also considered reforms certain agencies have implemented. To estimate the effects of these alternative approaches to calculating tenant payments on the subsidy levels that result, we analyzed a December 2010 extract of tenant records from HUD's Public and Indian Housing Information Center (PIC). These records contain information about participating households, as of December 2010, including information on gross and adjusted income levels, housing unit size and rent, tenant contributions and housing assistance payments, as well as information on age, sex, and disability status of each household member. To focus on the core of the assisted household population, we examined only those households with five or fewer members living in units with one, two, or three bedrooms. We determined the elderly and disability status of each household. Specifically, we defined a household as an elderly household if either of the first two household members (the head of household and possibly a spouse or co-head) was age 62 or over, and we placed a household in disability status if any household member was identified as having a disability.
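The reserve estimate described above can be illustrated with a minimal sketch in Python. It assumes the 8.5 percent threshold is applied to each agency's annual subsidy budget authority; the agency names and dollar amounts are hypothetical, and HUD's actual calculation may differ in its details.

# Minimal sketch of the excess subsidy reserve estimate described above.
# Assumption: the 8.5 percent (about one month) threshold is applied to each
# housing agency's annual budget authority. All figures are hypothetical.

EXCESS_THRESHOLD = 0.085

agencies = [
    # (agency, net restricted assets (NRA) balance, annual budget authority)
    ("Agency A", 4_200_000, 36_000_000),
    ("Agency B", 900_000, 15_000_000),
    ("Agency C", 7_500_000, 60_000_000),
]

def excess_reserve(nra_balance, annual_budget_authority):
    """Return the portion of the NRA balance above the retention threshold."""
    allowed = EXCESS_THRESHOLD * annual_budget_authority
    return max(0.0, nra_balance - allowed)

total_excess = 0.0
for name, nra, authority in agencies:
    excess = excess_reserve(nra, authority)
    total_excess += excess
    print(f"{name}: NRA ${nra:,.0f}, threshold ${EXCESS_THRESHOLD * authority:,.0f}, excess ${excess:,.0f}")

print(f"Estimated excess reserves across agencies: ${total_excess:,.0f}")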
For the identified subsidy alternatives, we calculated an alternative tenant contribution using information on income and applicable fair market rent in the PIC file as appropriate, and calculated the resulting assistance payment. (The assistance payment equals the lesser of the payment standard or the gross rent, minus the tenant payment, with the tenant payment subject to any existing minimum tenant payment.) We did not consider the possible effects of any change in household behavior, either in terms of continued participation in the voucher program or in choice of housing unit or rent level that could be induced by changes in tenant contributions. In conducting our work, we assessed the reliability of datasets provided by HUD, including data files derived from HUDCAPS, VMS, and PIC. Specifically, we performed basic electronic testing of relevant data elements, such as housing assistance payment amounts, total tenant payment, and unit months leased. We reviewed HUD's data dictionaries, instructions, and other relevant documentation. We also interviewed HUD officials knowledgeable about the data to obtain clarifications about key variables and calculation rules. Where possible, we compared our results with other sources to ensure the reasonableness of the information. We determined that the data were sufficiently reliable for the purpose of this report. Finally, for all of our objectives, we interviewed HUD officials and consulted with one academic and officials from various housing groups, including the Center on Budget and Policy Priorities, Council of Large Public Housing Authorities, National Low-Income Housing Coalition, National Association of Housing and Redevelopment Officials, Public Housing Authorities Directors Association, Quadel Consulting, and the Urban Institute. Further, we contacted 53 housing agencies that administer the voucher program. In selecting these housing agencies, we considered the number of authorized vouchers, location (that is, HUD-defined regions), and leasing and spending rates for the voucher program as of March 2011. We conducted this performance audit from February 2011 through March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Daniel Garcia-Diaz, Acting Director; Stephen Brown, William Chatlos, Karen Jarzynka-Hernandez, Cory Marzullo, John McGrail, Josephine Perez, and Barbara Roesmann made key contributions to this report.
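The assistance payment calculation described in the scope and methodology above can be sketched as follows. The function mirrors the stated rule (the lesser of the payment standard or gross rent, minus the tenant payment, subject to any minimum tenant payment); the alternative tenant-contribution rules and all dollar amounts are simplified, hypothetical illustrations of the three reform types discussed rather than the exact parameters used in the analysis.

# Sketch of the assistance payment calculation and three illustrative
# tenant-contribution alternatives. All figures are hypothetical.

def assistance_payment(payment_standard, gross_rent, tenant_payment, minimum_rent=0):
    """Subsidy = lesser of payment standard or gross rent, minus the tenant
    payment, with the tenant payment subject to any minimum rent."""
    tenant_share = max(tenant_payment, minimum_rent)
    return max(0, min(payment_standard, gross_rent) - tenant_share)

# Hypothetical household and unit (monthly amounts).
monthly_adjusted_income = 1_000
gross_rent = 950
payment_standard = 900
fair_market_rent = 1_100

# Current rule: tenant generally pays 30 percent of adjusted income.
current = assistance_payment(payment_standard, gross_rent, 0.30 * monthly_adjusted_income)

# Alternative 1: a higher minimum rent (binds only for very low-income households).
higher_minimum = assistance_payment(payment_standard, gross_rent, 0.30 * monthly_adjusted_income, minimum_rent=75)

# Alternative 2: tenant pays a higher share of income (35 percent).
higher_income_share = assistance_payment(payment_standard, gross_rent, 0.35 * monthly_adjusted_income)

# Alternative 3: tenant pays a percentage of the fair market rent (30 percent).
share_of_fmr = assistance_payment(payment_standard, gross_rent, 0.30 * fair_market_rent)

print(f"current: {current:.0f}, higher minimum rent: {higher_minimum:.0f}, "
      f"35 percent of income: {higher_income_share:.0f}, 30 percent of FMR: {share_of_fmr:.0f}")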
The Department of Housing and Urban Development's (HUD) Housing Choice Voucher (voucher) program subsidizes private-market rents for approximately 2 million low-income households. HUD pays a subsidy that generally is equal to the difference between the unit's rent and 30 percent of the household's income. HUD also pays an administrative fee, based on a formula, to more than 2,400 local housing agencies to manage the program. Over time, program expenditures have risen steadily, causing some to question how well HUD managed costs and used program resources. This report (1) discusses the key drivers of cost growth in the voucher program and the actions taken to control this growth and (2) analyzes various options to cut costs or create efficiencies. For this report, GAO analyzed HUD data; reviewed budget documents, program laws and regulations, guidance, academic and industry studies, and GAO reports; and interviewed officials from HUD, industry groups, and 93 housing agencies. Several factors—including rising rents, declining household incomes, and decisions to expand the number of assisted households—were key drivers of the approximately 29 percent increase (before inflation) in housing agencies' expenditures for the voucher program between 2003 and 2010. Congress and HUD have taken steps to limit cost increases while maintaining assistance for existing program participants. For example, Congress moved away from providing funding to housing agencies based on the number of voucher-assisted households they were authorized to subsidize and instead provided funding based on the generally lower number of voucher-assisted households housing agencies actually subsidized in the prior year. Further, HUD has proposed administrative relief and program flexibility for housing agencies, including streamlining program requirements and reducing subsidies paid. GAO identified several additional options that, if implemented effectively, could substantially reduce the need for new appropriations, cut costs (expenditures), or increase the number of households assisted. Reduce housing agencies' subsidy reserves. Housing agencies have accumulated approximately $1.8 billion in subsidy reserves (unspent funds). They can hold the funds in reserve or spend them on authorized program expenses in future years. Over time, large sums can accumulate. Although HUD has sought the authority to offset (reduce) its future budget requests by the amount of "excess" subsidy reserves, it has not provided Congress with complete or consistent information on how much of these reserve funds housing agencies should retain for contingencies. GAO has highlighted the importance of providing clear and consistent information on housing agencies' reserves to Congress so it can make informed funding decisions. Implement administrative reform. HUD officials have noted that certain requirements for administering the voucher program are burdensome and costly and could be streamlined. In addition, the formula HUD uses to pay administrative fees to housing agencies is not tied to current administrative costs or requirements. HUD has an administrative fee study underway, which is intended to identify specific reforms to ease administrative burden, increase efficiencies, and suggest ways to align the administrative fee formula with the functions housing agencies perform. Without the results of this study, Congress may not have the information necessary to make fully informed policy and funding decisions related to the voucher program.
Implement rent reform and consolidate voucher administration. Rent reform (for example, reducing subsidies by requiring households to pay more toward rent) and consolidation of program administration under fewer housing agencies could yield substantial cost savings—approaching $2 billion—or allow housing agencies to serve additional households, provided annual savings were reinvested in the program. However, while these options may have some advantages over the current program structure, they would require policymakers to consider some potential trade-offs, including increased rent burdens for low-income households, increased concentration of assisted households in high poverty areas, and more limited local control over voucher programs. GAO identifies options for increasing efficiencies and recommends that HUD (1) determine what level of reserve funding housing agencies should maintain and reduce future budget requests by the amount of excess reserves and (2) consider proposing options for simplifying program administration and changes to the administrative fee formula. HUD did not agree or disagree with the recommendations. While it noted that the draft provided an accurate assessment, it offered some clarifications and responses.
The UI program was established by Title III of the Social Security Act in 1935 and is a key component in ensuring the financial security of America's workforce. This complex program, which is administered jointly by the U.S. Department of Labor and the states, provides temporary cash benefits to workers who lose their jobs through no fault of their own. The program also serves to stabilize the economy in times of economic recession. Labor is responsible for overseeing the UI program to ensure that the states operate effective and efficient UI programs. Labor is also responsible for monitoring state operations and procedures, providing technical assistance and training, and analyzing UI program data to diagnose potential problems. To oversee the program, Labor's Employment and Training Administration maintains 10 offices in 6 geographic regions that are responsible for working with states in a specific geographic area (see fig. 1). The regional offices are the states' main point of contact with Labor and serve as a vital link between headquarters and the states for providing technical assistance and clarifying program policies, objectives, and priorities. Moreover, the regional offices have primary responsibility for overseeing the fiscal and management integrity of the UI program. Although Labor provides oversight and guidance to ensure that each state operates its program in a manner that is consistent with federal guidelines, the federal-state structure of UI places primary responsibility for administering the program on the states. The states also have wide latitude to administer their UI programs in a manner that best suits their needs within the guidelines established by federal law. For example, to enhance the efficiency and cost-effectiveness of their UI systems, many states have established centralized service centers that allow claimants to apply for benefits by telephone, fax, or the Internet. The UI program is funded through federal and state taxes levied on employers. The states collect the portion of the tax needed to pay unemployment insurance benefits, whereas state and federal administrative costs and other related federal costs of the UI program are financed through the federal tax. Labor holds these funds in trust on behalf of the states in the Unemployment Trust Fund of the U.S. Treasury. To obtain annual UI administrative funding from Labor, states submit an annual request for funding as part of their State Quality Service Plan (SQSP). Labor reviews each state's plan and subsequently determines if any adjustment in funding is required. The regional offices may also negotiate changes and revisions to the states' funding requests before the final allocation is approved. In fiscal year 2001, Labor provided about $2.3 billion to states to administer their programs. To be eligible for UI benefits in most states, unemployed workers must fulfill five general conditions within overall federal guidelines. They must: have worked for a specified amount of time in a job that is covered by the unemployment insurance program; have left their prior jobs involuntarily (such as by employer layoff) or have quit their jobs for "good cause"; be currently "able and available" for work and, in most states, be actively seeking work; enroll in employment services or job training programs (in some states); and be legally eligible to work—for example, noncitizens must be lawfully admitted to work in the United States, or lawfully present for other reasons.
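For illustration only, the general conditions above could be pre-screened at intake roughly as in the sketch below; the field names and the base-period earnings threshold are hypothetical, and the actual requirements and thresholds are set by each state's law within federal guidelines.

# Illustrative pre-screen of the general UI eligibility conditions listed above.
# Field names and the earnings threshold are hypothetical; actual rules vary by state.

def prescreen_claim(claim):
    reasons = []
    if claim["base_period_earnings"] < 1_500:
        reasons.append("insufficient covered wages in the base period")
    if claim["separation_reason"] not in ("layoff", "quit_good_cause"):
        reasons.append("separation was not involuntary or for good cause")
    if not (claim["able_to_work"] and claim["available_for_work"]):
        reasons.append("not able and available for work")
    if not claim["actively_seeking_work"]:
        reasons.append("not actively seeking work (required in most states)")
    if not claim["enrolled_in_employment_services"]:
        reasons.append("not enrolled in employment services or job training (required in some states)")
    if not claim["legally_eligible_to_work"]:
        reasons.append("not legally eligible to work")
    return reasons  # an empty list means the claim passes the general screen

claim = {
    "base_period_earnings": 8_200,
    "separation_reason": "layoff",
    "able_to_work": True,
    "available_for_work": True,
    "actively_seeking_work": True,
    "enrolled_in_employment_services": True,
    "legally_eligible_to_work": True,
}
print(prescreen_claim(claim) or "passes general screen")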
Each state's laws provide specific requirements for claimants to meet these general conditions, and each state determines individual eligibility, the amount and duration of benefits, and disqualification provisions. Because Labor provides states with the flexibility to design their own UI programs, the eligibility policies and laws governing the administration of the UI program vary from one state to another. In general, however, claimants apply for UI benefits over the telephone, via computer using the Internet, or in person at a local office. State claims representatives are responsible for determining each claimant's eligibility for UI benefits by gathering and (when possible) verifying important information, such as their identity, employment history, the reason they are no longer working, and other sources of income they may have. Once the claim has been submitted for processing, the state sends forms to the claimant's employer(s) requesting that they verify the claimant's wages and the reason the claimant is no longer working. If the individual's claim for UI is approved, the state then determines the amount of UI benefits, based on the individual's earnings during the prior year and other factors. UI benefits may be mailed to a claimant's home or post office box, or sent electronically to a bank account. In general, most states are expected to provide the first benefits to the claimant within 21 days of the date the state determined that the claimant was entitled to benefits. Labor funds two principal kinds of activities for detecting and measuring UI overpayments at the state level—Benefit Payment Control and Benefit Accuracy Measurement. Each state is required to operate a benefit payment control division that is responsible for detecting and recovering overpayments. This process also involves reporting the reason for the overpayment—such as wages that the claimant failed to report. Each state is required to report overpayments along with other data to Labor on a quarterly basis. By contrast, Labor's benefit accuracy measurement data are an estimate of the total overpayments in the UI program—in each state and the nation as a whole—based on a statistically valid examination of a sample of paid and denied claims. Benefit accuracy measurement is one of the main quality assurance systems that Labor uses to measure payment accuracy in the program. Of the $30 billion in UI benefits paid nationwide in 2001, Labor estimates that about $2.4 billion in UI overpayments occurred. About one-quarter of these overpayments ($577 million) were identified as fraud, according to its quality assurance data. Overpayments may occur because individuals work while receiving benefits, fail to register for employment services, fail to look for a new job, or misrepresent their identity. Other sources of overpayments include agency errors and inaccurate or untimely information provided by employers. Of the $2.4 billion in projected overpayments, Labor estimates that about $1.3 billion could have potentially been detected and/or recovered in 2001 given existing state procedures and policies. In contrast, the states reported that $650 million in overpayments were made in 2001, of which $370 million was actually recovered. Overall, Labor's overpayment estimate is more than three times the amount reported by the states.
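The relationships among the 2001 figures cited above can be summarized with a short calculation; the amounts are the rounded estimates reported in the text.

# Rounded 2001 figures cited above, in billions of dollars.
total_benefits_paid = 30.0       # UI benefits paid nationwide
estimated_overpayments = 2.4     # Labor's quality assurance estimate
estimated_fraud = 0.577          # portion attributed to fraud
detectable_recoverable = 1.3     # Labor's estimate of what states could detect and recover
state_reported = 0.650           # overpayments the states reported
state_recovered = 0.370          # amount the states recovered

print(f"Estimated overpayment rate: {estimated_overpayments / total_benefits_paid:.1%}")
print(f"Share attributed to fraud: {estimated_fraud / estimated_overpayments:.0%}")
print(f"Share Labor deems detectable and recoverable: {detectable_recoverable / estimated_overpayments:.0%}")
print(f"Labor's estimate relative to state-reported overpayments: {estimated_overpayments / state_reported:.1f}x")
print(f"Share of state-reported overpayments recovered: {state_recovered / state_reported:.0%}")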
The difference in the overpayment figures produced by the two systems can be attributed to the fact that Labor's quality assurance estimate is based on a more comprehensive examination of individual UI claims than the states' benefit payment control activities can generally produce. Our analysis suggests that Labor's quality assurance system estimate is a more complete assessment of the true level of overpayments in the UI program, partly because the system documents overpayments that often cannot be detected in many states using their existing benefit payment control procedures. Over the past 10 years, the annual overpayment rate estimated by Labor's quality assurance system has remained fairly constant as a percentage of total benefits paid—ranging from a low of 8.0 percent in 2001 to a high of 9.2 percent in 1999 and averaging about 8.5 percent during that period. Overpayments averaged about $1.8 billion per year, reaching a high of about $2.4 billion in 2001. (See fig. 2.) The slight increase in overpayments estimated by the quality assurance system in 2001 is likely related to the overall increase in total UI benefits paid that year. The overpayments estimated by Labor's quality assurance data fall into a number of categories. Some overpayments result from errors in claimants' reporting or the state agency's recording of important eligibility information, such as wages or other sources of income that a claimant obtained while receiving UI benefits ("benefit year earnings" violations). (See table 1.) Overpayments also occur because claimants are not able and/or available to work, fail to register for employment services as required by their state, or fail to look for a new job as required ("eligibility" violations). (See app. I.) The quality assurance data also classify overpayments as being "fraud" or "nonfraud". Fraud can occur when claimants intentionally misrepresent eligibility information, employers file fraudulent claims, or state UI program personnel abuse sensitive information such as social security numbers for personal gain. Of the total overpayments estimated by Labor in 2001, about $577 million (24 percent) were attributed to fraud. Although this estimate takes into account each state's individual laws, we found that the states differ substantially in how they define fraud. For example, some states may include overpayments resulting from unreported earnings as fraud, while other states do not. Thus, state-to-state comparisons of the level of fraud in the UI program and the activities that constitute fraud are difficult to make. Overall, the largest overpayment categories in 2001 were attributed to eligibility issues (35 percent), benefit year earnings (31 percent), and separation issues (21 percent). Federal and state officials told us that some categories of overpayments are more difficult to detect than others. For example, some officials told us that it can be difficult for states to accurately determine, in a cost-effective manner, if a claimant is actively searching for a job (an eligibility requirement in some states). In particular, there is no readily available source that states can access for information on whether each claimant is actively looking for employment. Work search requirements vary considerably from state to state, and can have a substantial impact on state payment accuracy rates.
Moreover, states generally lack sufficient resources to permit their benefit payment control personnel to conduct in-depth examinations of each claimant's activities to determine whether the claimant is eligible. States that have only a limited work search requirement (or no requirement at all) may not establish overpayments for UI claimants who fail to look for a new job. In contrast, states with rigorous work search policies are more likely to establish overpayments for claimants who do not meet this requirement. Although some categories of overpayments are more difficult than others to detect or recover, Labor's analysis suggests that the states could have potentially detected and recovered about $1.3 billion (54 percent) of the $2.4 billion in estimated overpayments in 2001. This estimate is based on Labor's analysis of the types of overpayment errors the states' benefit payment control operations were most likely to be able to identify and recover given their current policies and procedures. (See table 2.) In particular, states' benefit payment control activities tend to focus on detecting overpayments that result from unreported income (benefit year earnings or base wage period violations) and payments to individuals who are not entitled to UI benefits due to the circumstances under which they became unemployed (separation issues). For example, benefit payment control staff may use the "Wage/Benefit Crossmatch" to identify and examine claimants who received UI benefits during a week in which they appear to have earned wages. Labor's analysis also suggests that other types of overpayments are likely to be detected by most states given their current policies and procedures. These include unreported or underreported income from social security programs, illegal aliens claiming benefits, and unreported vacation or severance pay. Furthermore, based on Labor's analysis, we believe that a substantial proportion of the overpayments detected by the states could be recovered using commonly available procedures such as offsetting claimants' current and future benefits and intercepting other sources of income such as state tax refunds. Labor determined that the remaining $1.1 billion in estimated overpayments could probably not be detected or recovered by the states due to limitations in their existing policies and procedures. For example, overpayments caused by state agency errors are generally not pursued for recovery. In contrast to Labor's estimate, the states reported about $653 million in overpayments in 2001—about one-fourth of the total that Labor's quality assurance system identified. Moreover, at the time of our review, the states reported that they had recovered about $370 million of this amount. The quality assurance and the benefit payment control systems differ in the scope and methods of the activities they use to identify overpayments. On the basis of our analysis as well as analysis performed by Labor's Division of Performance Management, we believe that Labor's quality assurance system data represent a more complete assessment of the true level of UI overpayments than the benefit payment control figure reported by the states. In particular, the quality assurance system is able to estimate all the potential overpayments that have occurred in each state's UI program because it is based on a statistically valid sample of UI claims from each state.
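How a sample-based estimate of this kind is produced can be illustrated with a minimal sketch; the case data below are hypothetical, and the actual benefit accuracy measurement methodology is considerably more involved (for example, it uses stratified sampling, survey weights, and samples of denied claims).

# Minimal sketch of extrapolating a statewide overpayment estimate from a
# sample of paid claims, in the spirit of the quality assurance approach
# described above. All case data and totals are hypothetical.

sampled_cases = [
    # (benefits paid on the sampled claim, overpayment found by the investigator)
    (2_400, 0), (1_800, 450), (3_100, 0), (2_050, 600), (1_500, 0),
    (2_750, 0), (1_900, 0), (2_200, 0), (2_600, 825), (1_700, 0),
]

total_benefits_paid_statewide = 500_000_000  # hypothetical state total for the year

sample_paid = sum(paid for paid, _ in sampled_cases)
sample_overpaid = sum(overpaid for _, overpaid in sampled_cases)
overpayment_rate = sample_overpaid / sample_paid

estimated_statewide_overpayments = overpayment_rate * total_benefits_paid_statewide
print(f"Sample overpayment rate: {overpayment_rate:.1%}")
print(f"Estimated statewide overpayments: ${estimated_statewide_overpayments:,.0f}")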
Moreover, quality assurance investigators are able to conduct a more detailed, comprehensive analysis of each case they review than is typically possible for most states' benefit payment control operations. For example, investigators are generally able to identify many types of overpayments because they can spend more time verifying the accuracy of the information provided to the state by personally contacting employers, claimants, and third parties. In addition, investigators typically spend between 5 and 8 hours examining a single case, which allows them to perform a relatively in-depth review of a claimant's eligibility. By contrast, the states' benefit payment control activities are often affected by operational and policy factors that limit their ability to detect and/or recover overpayments. These factors include limited staffing and funding, cost-benefit considerations (e.g., the costs associated with recovering an overpayment may be greater than the overpayment amount), and a lack of access to timely data sources. Moreover, benefit payment control personnel are required to quickly examine thousands of cases to identify overpayments, thus potentially limiting their ability to thoroughly review cases for payment accuracy. We identified various management and operational practices at both the state and federal level that contribute to UI overpayments. At the state level, we found that a number of states place primary emphasis on quickly processing and paying UI claims and may not take the necessary steps to adequately verify claimants' initial and continuing eligibility for benefits. In particular, five of the six states we visited were not fully staffing their benefit payment control operations and had moved staff to claims processing activities. In addition, while some of the states we visited use automated data sources to determine whether claimants are working or obtaining other benefits while receiving UI, others rely heavily on self-reported information from claimants to make payment decisions. States also tend to establish UI program policies and priorities in response to direction from the Department of Labor, which in some instances may contribute to overpayments. For example, the performance measures that Labor uses to gauge states' operations tend to emphasize payment timeliness more heavily than payment accuracy. In addition, Labor has been reluctant to link the states' performance on payment accuracy to the annual administrative funding process as a way of holding states accountable for performance. Labor has taken some actions to improve UI program integrity, such as working to obtain additional automated data sources that could help states make more accurate eligibility decisions and developing a payment accuracy performance measure. However, Labor and the states have not placed sufficient emphasis on balancing the often competing priorities of quickly processing and paying UI claims with the need to ensure that only eligible individuals receive benefits. The emphasis that an agency places on critical program activities can be measured, in part, by the level of staff and other resources devoted to those activities. Most of the states we visited placed primary emphasis on quickly processing and paying UI claims, with less attention given to program integrity operations.
In particular, we found that program managers commonly moved staff assigned to program integrity activities (such as benefit payment control) to claims processing positions in response to increases in the number of UI claims being filed. For example, one state was using only 4 of the 16 positions (25 percent) it was allotted by Labor for benefit payment control. Only one of the six states we visited was fully staffing its benefit payment control operations. The remaining states had transferred staff into other positions, including claims processing. Another state stopped drawing its quality assurance sample for a period of time and moved staff responsible for these operations into claims processing positions when unemployment claims increased during the third quarter of 2001. Many federal and state officials we interviewed told us that states move staff into claims processing roles from other positions because they lack adequate funding to properly administer all the necessary activities of their UI programs. In this regard, some state officials told us that they anticipated additional funding from the federal government which they could use to increase the resources and staff dedicated to benefit payment control and other program integrity operations. However, a number of officials told us that historically the UI program’s primary objective has been to pay claimants in the most expeditious manner possible, and that this would continue to be a guiding principle of the program. While states differed in the level of staff and resources devoted to program integrity activities, we also found variation in the processes and tools they used to verify information that could affect a claimant’s eligibility for UI benefits. The most important information requiring verification generally includes an individual’s wages and employment status, receipt of other federal or state benefits, identity, and citizenship status. All of the states we visited conduct basic computer matches that help them to detect potential UI overpayments due to unreported earnings. For example, each state regularly conducts a Wage/Benefit Crossmatch that compares the database of UI claimants with the state’s database of individuals’ wages to identify UI recipients who may have unreported income in the same state in which they are receiving UI benefits. Labor and the states generally view this match as an effective tool for identifying claimants who may have unreported wages within the state. However, because state wage data are only available quarterly, the crossmatch relies on information that may be several months old by the time the match is conducted. This delay allows some overpayments to remain undetected for a long period of time. Officials at Labor and in some states emphasized that overpayments are more likely to be recovered if they can be detected quickly. In general, the states tend to recover a substantial proportion of the overpayments they detect by offsetting a claimant’s current and future UI benefits. Because UI benefits tend to be paid out over a relatively short period of time—about 14 weeks on average—overpayment detection and recovery activities may begin long after individuals leave the UI rolls. This inability to obtain timely eligibility information places the program at substantial risk for overpayments that may never be recovered. 
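A simplified version of the crossmatch logic described above is sketched below; the record layouts, identifiers, and amounts are hypothetical, and an actual match runs against complete state benefit and quarterly wage files and only flags cases for further examination.

# Simplified sketch of a Wage/Benefit Crossmatch: flag claimants who were paid
# UI benefits in a quarter for which an employer also reported wages for them.
# SSNs, quarters, and amounts are hypothetical.

ui_payments = [
    # (claimant SSN, calendar quarter, UI benefits paid)
    ("123-45-6789", "2001Q3", 2_400),
    ("987-65-4321", "2001Q3", 1_800),
    ("555-12-3456", "2001Q4", 2_100),
]

quarterly_wages = [
    # (employee SSN, calendar quarter, wages reported by the employer)
    ("123-45-6789", "2001Q3", 5_200),
    ("222-33-4444", "2001Q3", 9_800),
    ("555-12-3456", "2001Q4", 3_900),
]

wage_index = {(ssn, quarter): wages for ssn, quarter, wages in quarterly_wages}

# Potential overpayments: benefits and reported wages in the same quarter.
for ssn, quarter, benefits in ui_payments:
    if (ssn, quarter) in wage_index:
        wages = wage_index[(ssn, quarter)]
        print(f"Examine {ssn} for {quarter}: ${benefits:,} in benefits, ${wages:,} in reported wages")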
More timely sources of data than the Wage/Benefit Crossmatch exist to verify a claimant's employment status, such as the State Directory of New Hires (referred to as the "state new hires database"). The states' new hires databases can provide information on individuals' current employment status, and have been found to be effective in preventing or reducing the amount of UI overpayments. However, we found that this data source is not routinely used in all states. For example, two of the states we visited do not currently use their new hires database to verify claimants' earnings or employment status. Officials in one state told us that they currently lacked access to the state's new hires database (but are seeking access), while those in another state questioned the cost-effectiveness of its use. However, other states that use this data source have reported that it is helpful in detecting overpayments more quickly than the Wage/Benefit Crossmatch. For example, one state reported that because the new hires data detect overpayments earlier than other detection methods, the size of its average overpayment at the time of detection has been reduced from about $2,800 to roughly $750. Moreover, the same state reported that it detected about 6,700 overpayments totaling over $4 million using its new hires database between July 2000 and December 2001. Overall, use of the new hires database in this state accounted for more than 35 percent of all instances of overpayments detected during that period. Another state reported increased overpayment collections of about $19 million over 4 years, in part due to earlier detections from the new hires database. Labor's OIG has identified the new hires database as a potentially useful tool for detecting overpayments resulting from unreported income, which makes up a substantial portion of the total overpayments estimated by the quality assurance system each year. Although Labor has encouraged each state to use its own new hires database for purposes of administering its UI program, we found that nationally a number of states still do not use this data source. While the states' new hires data are useful for verifying claimants' employment status, a main limitation is that they identify this information only for claimants within a given state. To detect unreported or underreported wages in other states, some states also use various types of interstate matches that are facilitated by Labor. One match (called the "Interstate Crossmatch") is conducted quarterly by most states for all UI claims and is designed to detect claimants who may have wages in another state. However, this match relies on wage data that are typically about 4 to 6 months old and, therefore, is of limited use in determining claimants' initial eligibility for benefits. The states may also use another type of match called the "Interstate Inquiry." This system allows a UI claims representative to check a claimant's UI and employment status in other states. However, officials at Labor and the states we visited told us that this system is generally only used if the claims representative is suspicious about the validity of the claim. Moreover, the system can only be used to check individual claimants and is not designed to verify the status of large numbers of claimants simultaneously. Finally, two of the states we visited periodically conduct their own matches with bordering states.
However, this method generally requires individual states to develop formal data sharing agreements with one another, which can be time-consuming and cumbersome. To enhance the ability of states to verify the status of claimants who could be working or receiving UI benefits in other states, many of the officials we spoke with advocated giving states access to the Office of Child Support Enforcement's National Directory of New Hires (NDNH). The NDNH is a comprehensive source of unemployment insurance, wage, and new hires data for the whole nation. However, current law limits access to the NDNH and does not permit individual states to obtain data from it for purposes of verifying claimants' eligibility for UI. Moreover, our prior work examining the NDNH has revealed concern among some federal officials that wider access to the database could jeopardize the security and confidentiality of the information it contains. One possible alternative to the NDNH suggested by federal and state officials for tracking interstate wages and UI benefit receipt is the Department of Labor's Wage Record Interchange System (WRIS). This system, which was developed in response to the Workforce Investment Act (WIA) of 1998, is a "data clearinghouse" that makes UI wage records available to states seeking employment and wage information on individuals in other states. Certain federal officials and others familiar with WRIS told us that with some modification—such as incorporating the more timely new hires data from the states—WRIS could be a logical alternative to the NDNH because the computer network for sharing data among the states already exists. However, one official familiar with the system noted that while it contains the necessary data to show whether a claimant is earning wages in another participating state, it currently lacks important pieces of information (such as states' new hires data) that would make it most useful as an interstate verification tool. Moreover, in a recent report, we noted that some states have been reluctant to become involved with WRIS, partly because of concerns about the cost of administering the system. Furthermore, we noted that if not all states participate, the value of WRIS will be diminished—even for participating states—because no data will be available from nonparticipating states' UI wage records. This is an area where Labor could potentially play a larger role. In particular, Labor could explore options for enhancing WRIS as an overpayment detection tool and facilitating states' participation in any modified system. Although modifying existing systems and obtaining access to new, more timely data sources may entail additional costs for Labor and the states, our review and prior work in other programs suggest that the potential savings in program funds could outweigh these costs. Claimants' eligibility for UI benefits may be affected if they are receiving benefits from other state or federal programs. For example, claimants in some states are ineligible for UI benefits, or may receive reduced benefits, if they are receiving workers' compensation. Overpayments can occur if claimants do not accurately report the existence or amount of such benefits when they apply for UI, or if the state employment security agency fails to verify the information in a timely manner. Only two of the six states we visited verify claimants' receipt of workers' compensation using independent sources of information.
Moreover, at least one of these states checks for receipt of workers' compensation only if the claimant self-reports that he or she is currently receiving such benefits. Similarly, receipt of some federal benefits such as cash payments from Social Security programs may affect a UI claimant's eligibility for or amount of benefits. For example, one state's policy manual requires claims representatives to ask claimants if they are currently receiving Social Security Disability Insurance (DI) or Old Age and Survivors Insurance (OASI) benefits, which could reduce or eliminate the amount of UI benefits they are eligible to receive. If a claimant states that he or she is not receiving DI benefits, then no further actions are taken to independently verify this information. Labor's quality assurance data indicate that in 2001 about $35 million in UI overpayments was due to unreported Social Security benefits, such as DI. To ensure that UI benefits are paid only to individuals who are eligible to receive them, it is important that states verify claimants' identity and whether they are legal residents. However, states may be vulnerable to fraud and overpayments because they rely heavily on claimants to self-report important identity information such as their social security number (SSN) or are unable to verify such information in a timely manner. Prior investigations by Labor's OIG demonstrate that the failure or inability of state employment security agencies to verify claimants' identity has likely contributed to millions of dollars in UI overpayments stemming from fraud. One audit conducted in four states (Florida, Georgia, North Carolina, and Texas) revealed that almost 3,000 UI claims totaling about $3.2 million were paid to individuals using SSNs that did not exist or belonged to deceased individuals. Furthermore, the OIG concluded that illegal aliens filed a substantial proportion of these claims. We found that vulnerabilities remain with regard to verifying claimants' identity and citizenship status. For example, none of the six states we visited have access to the Social Security Administration's (SSA) State Online Query (SOLQ) system, which can be used to verify the identity of claimants applying for UI by matching their name, date of birth, and SSN in real time. At the time of our review, only two states (Utah and Wisconsin) had access to this system because they were participating in a pilot project with SSA. The states we visited generally use a batch file method in which large numbers of SSNs are periodically sent to SSA for verification. This process tends to be less timely than online access for verifying claimants' initial eligibility for benefits. However, one state we visited reported that it does not perform any verification of the SSNs that UI claimants submit because a prior system it used for verifying SSNs identified only a small number of potential violations. This state decided that its resources could be better used to support other key work priorities, including claims processing. In addition, all six states we visited rely mainly on claimants to accurately self-report their citizenship status when they first apply for UI benefits. State officials told us that they do not verify this information with the Immigration and Naturalization Service if the claimant states that he or she is a citizen.
The results of our review suggest that the inability of some states to accurately verify whether claimants are lawfully present in the United States, and thus their eligibility for UI, has contributed to program overpayments. Labor estimates that about $30 million of the $1.3 billion in overpayments that were deemed to be the most readily detected and recovered by the states in 2001 were due to illegal alien violations. (See table 2.) Even if individuals do not misrepresent their identity or citizenship status to illegally obtain UI benefits, the potential for fraud and abuse may still exist. For example, one state we visited revealed that it, along with a bordering state, had identified nine SSNs that are currently being illegally used by over 700 individuals as proof of eligibility for employment. Upon further investigation, we determined that these SSNs were being used in at least 29 states, and seven of the SSNs belonged to deceased individuals. Although we did not find any instances in which UI benefits were obtained by those individuals earning wages under these numbers, both state and federal officials agreed that these individuals could fraudulently apply for and receive UI benefits in the future. Given the potential for fraudulent receipt of UI or other benefits, and the apparently widespread misuse of social security numbers, our Office of Special Investigations has initiated an investigation into this matter in coordination with the Social Security Administration and the Immigration and Naturalization Service. To varying degrees, officials from all of the six states we visited told us that employers or their agents do not always comply in a timely manner with state requests for information needed to determine a claimant's eligibility for UI benefits. For example, one state UI Director reported that about 75 percent of employers fail to respond to requests for wage information in a timely manner. In addition, an audit conducted between 1996 and 1998 by Labor's OIG revealed that 22 out of 53 states experienced a non-response rate of 25 percent or higher for wage requests sent to employers. A more in-depth review of seven states in this audit also showed that $17 million in overpayments occurred in four of the states because employers did not respond to the states' requests for wage information. We discussed these issues with an official from a national employer representative organization. After consulting a broad cross-section of employers that are members of the organization, the official told us that some employers may resist requests to fill out paperwork from states because they view the process as cumbersome and time-consuming. In addition, some employers apparently indicated that they do not receive feedback on the results of the information they provided to the states and, therefore, cannot see the benefit of complying with the requests. It is also difficult for some employers to see how UI overpayments and fraud may affect them. In particular, because employers are unlikely to experience an immediate increase in the UI taxes they pay to the state as a direct result of overpayments, they do not see the benefit in complying with state requests for wage data in a timely manner. Although Labor has taken some limited actions to address this issue, our work to date shows that the failure of employers to respond to requests for information in a timely manner is still a problem.
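The kind of shared-SSN misuse described earlier in this section can be surfaced with a simple screen of employment records, as sketched below; the records, names, and SSNs are hypothetical, and an actual screen would run against much larger files and feed referrals to investigators.

# Simple screen for shared-SSN misuse: flag SSNs that appear in employment
# records under multiple names or in multiple states. All records are hypothetical.

from collections import defaultdict

employment_records = [
    # (SSN, employee name, state)
    ("111-22-3333", "A. Smith", "TX"),
    ("111-22-3333", "B. Jones", "GA"),
    ("111-22-3333", "C. Lopez", "FL"),
    ("444-55-6666", "D. Chen", "NC"),
]

names_by_ssn = defaultdict(set)
states_by_ssn = defaultdict(set)
for ssn, name, state in employment_records:
    names_by_ssn[ssn].add(name)
    states_by_ssn[ssn].add(state)

for ssn, names in names_by_ssn.items():
    if len(names) > 1:
        print(f"SSN {ssn} used under {len(names)} names in "
              f"{len(states_by_ssn[ssn])} states -- refer for investigation")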
While most states recover a large proportion of their overpayments by offsetting claimants' current or future benefits, some of the states we visited have additional overpayment recovery tools for individuals who are no longer receiving UI. These tools include state tax refund offset, wage garnishment, and use of private collection agencies. Some of these procedures, such as the state tax refund offset, are viewed as particularly effective. For example, one state reported overpayment collections of about $11 million annually between 1998 and 2000 resulting from this process. Other states have increased overpayment collections by pursuing more aggressive criminal penalties for individuals who are suspected of UI fraud. For example, one state prosecutes UI fraud cases that exceed a minimum threshold as felonies instead of misdemeanors. Officials in this state reported that by developing agreements with local district attorneys, the state OIG has been able to use the threat of imprisonment to encourage claimants suspected of fraud to make restitution for UI overpayments. According to state officials, this initiative has resulted in $37 million in additional overpayment collections in calendar years 2000 and 2001. However, other states we visited lacked many of these tools. For example, one state relied heavily on offsets against current UI claims to recover overpayments because its laws and policies did not permit the use of many of the tools that other states have found to be effective for collecting overpayments from individuals who have left the UI rolls. In general, Labor's approach to managing the UI program has emphasized quickly processing and paying UI claims, with only limited attention to overpayment prevention, detection, and collection. This approach is most evident in the priorities that are emphasized in Labor's recent annual performance plans, the UI program's performance measurement system, and the limited use of quality assurance data to correct vulnerabilities in states' UI operations. For example, Labor's recent annual performance plans required under the Government Performance and Results Act of 1993 have not included strategies or goals to improve payment accuracy in state UI programs. In addition, we found that Labor's system for measuring and improving operational performance in the UI program is primarily geared to assess the timeliness of various state operations. Most of the first 12 performance measures (called Tier I) assess whether states meet specified timeframes for certain activities, such as the percentage of first payments made to claimants within 14 to 35 days and the percentage of claims appeals decided within 45 days. However, none of the Tier I measures gauge the accuracy of UI payments. Labor also gives Tier I measures more weight than the remaining measures (called Tier II), which assess other aspects of state performance, including fraud and nonfraud collections. Labor has developed national criteria specifying the minimum acceptable level of performance for most Tier I measures. States that fail to meet the minimum established criteria are required to take steps to improve their performance. Generally, states are required to submit a "Corrective Action Plan" to Labor as part of the annual SQSP. Moreover, Labor has stated that it could withhold the administrative funding of states that continue to perform below specified Tier I criteria over an extended period of time, although this rarely occurs.
By contrast, the Tier II measures do not have national minimum performance criteria, and are generally not enforced as strictly by Labor. For example, a state that fails to meet Tier II measures may be encouraged to submit a "Continuous Improvement Plan" discussing how it will address performance problems. However, Labor generally does not require a state to submit such a plan and does not withhold administrative funds as an incentive to ensure state compliance with Tier II measures. Officials from most of the states we visited also told us that the Tier I and Tier II measures make the UI program complex to administer, and may contribute to an environment in which overpayments are more likely. In particular, these officials told us that because the measures are so numerous and are designed to monitor a wide range of activities related to administering the UI program, it is difficult to place sufficient emphasis on more fundamental management issues, such as payment accuracy. There are currently more than 70 Tier I and Tier II measures that gauge how states perform in terms of the timeliness, quality, and accuracy of benefit decisions. These include the timeliness of first payments, the timeliness of wage reports from employers, the quality of appeals decisions, the number of employers that were audited, and the amount of fraud and nonfraud collections. A number of state officials we spoke with told us that it is difficult for states to adequately balance the attention they give to each of the measures because they are so numerous and complex. For example, some states tend to focus most of their staff and resources on meeting certain measures such as payment timeliness but, in the process, may neglect other activities such as those dealing with program integrity. Some officials suggested reducing or revising the current measures to make them more manageable. We raised this issue with Labor officials during our review. However, the officials were unable to comment on potential revisions to the measures because a previously scheduled assessment of Labor's performance measurement system was still ongoing. Labor indicated that revisions could potentially occur based on its ongoing review of the performance management system. In addition to the problems we identified with its performance measures, Labor has been reluctant to hold states accountable by linking their performance in areas such as payment accuracy to the annual administrative budget process. One tool Labor possesses to influence state behavior is the ability to withhold the state's annual administrative grant. However, this sanction is rarely used because it is generally intended to address instances of serious, sustained noncompliance by a state and is widely viewed as defeating the purpose of the program. Thus, many federal and state officials we interviewed perceive that Labor has few, if any, practical tools to compel state compliance with federal program directives. Compounding this problem is the existence of "bottom line authority"—an administrative decision made by Labor in 1986 that gave states greater flexibility over their expenditures and reduced federal monitoring of administrative expenditures. In particular, bottom line authority permits states to move resources among cost categories—such as from benefit payment control activities to claims processing—and across quarters within a fiscal year, as well as to use UI administrative resources based on each state's assessment of its needs.
Some officials we spoke with suggested that over time the existence of bottom line authority has hindered Labor's ability to effectively oversee the program. Given its current administrative authority to oversee the UI program, Labor has not done enough in recent years to encourage states to balance payment timeliness with the need for payment accuracy in a manner that does not require the complete withholding of administrative funds. For example, our review found that in the past, Labor linked the quality assurance process to the budget process and required states to meet specified performance levels as a condition of receiving administrative grants. Moreover, under federal regulations covering grants to states, Labor may temporarily withhold cash payments, disallow costs, or terminate part of a state's administrative grant due to noncompliance with grant agreements or statutes. Withholding or delaying a portion of the grant funds is one way Labor can potentially persuade states to implement basic payment control policies and procedures. In addition, during the annual budget process, Labor reviews states' requests for funds necessary to administer their UI programs and ensures an equitable allocation of funds among states. While completing those reviews, Labor could prioritize administrative funding to states to help them achieve or surpass agreed-upon payment accuracy performance levels. However, we found that Labor is using such tools only to a limited degree to help states enhance the integrity of their UI program operations. In addition to its overall emphasis on quickly processing and paying UI claims, Labor has been reluctant to use its quality assurance data as a management tool to encourage states to place greater emphasis on program integrity. According to the UI Performs Calendar Year 2000 Annual Report and Labor officials, quality assurance data should be used to identify vulnerabilities in state program operations, measure the effectiveness of efforts to address these vulnerabilities, and help states develop mechanisms that prevent overpayments from occurring. However, as currently administered, Labor's quality assurance system does not achieve all of these objectives. In particular, Labor lacks an effective mechanism to link its quality assurance data with specific improvements that are needed in states' operations. For example, over the last decade, payment errors due to unreported income have consistently represented between 20 and 30 percent of annual UI overpayments. While Labor's quality assurance system has repeatedly identified income reporting as a vulnerable area, Labor has not always played an active role in helping states develop specific strategies for improving their performance in this area. Of particular concern to us is that the overpayment rate for the nation has shown little improvement over the last 10 years. This suggests that Labor and some of the states are not adequately using quality assurance data to address program policies and procedures that allow overpayments to occur. According to its fiscal year 2003 performance plan, Labor intends to provide states with additional data from its quality assurance system on the sources of overpayments to assist them in crafting better front-end procedures for preventing overpayments. However, unless Labor uses the data to help states identify internal policies and procedures that need to be changed, it is unclear what impact Labor's efforts will have on improving the integrity of states' UI programs.
Finally, Labor has given limited attention to overpayment collections. Currently, Labor evaluates states' collection activities using a set of measures called Desired Levels of Achievement (DLA). States are expected to collect at least 55 percent of all the overpayments they establish annually through their benefit payment control operations. This 55 percent performance target has not been modified since 1979, despite advancements in technology over the last decade, such as online access to wage and employment information, that could make overpayment recovery more efficient. At the time of our review, 34 out of 53 states met or exceeded the minimum standard of 55 percent, and the nationwide average collection rate was about 57 percent. A small number of federal and state officials told us that states tend to devote the minimum resources needed to meet this target each year. For example, one state official told us that over time, UI program managers are able to reasonably calculate the number of staff that they must devote to benefit payment control activities in order to meet the minimum level for overpayment recoveries each year. Any additional staff are likely to be moved to claims processing activities. Some officials also indicated that the DLA for collections should be increased. However, our work shows that Labor has not actively sought to improve overpayment collections by requiring states to incrementally increase the percentage of overpayments they recover each year. Labor is taking steps to address some of the vulnerabilities we identified. At the time of our review, Labor was continuing to implement a series of actions that are designed to help states with the administration of their UI programs. These include the following: States use the Information Technology Support Center (ITSC) as a resource to obtain technical information and best practices for administering their UI programs. The ITSC is a collaborative effort involving the Department of Labor, state employment security agencies, private sector organizations, and the state of Maryland. The ITSC was created in 1994 to help states adopt more efficient, timely, and cost-effective service for their unemployment insurance claimants. Labor provides technical assistance and training for state personnel, as well as coordination and support for periodic program integrity conferences. For example, for the last three years, Labor has conducted at least four national training sessions focusing on the quality of UI eligibility decisions, including payment accuracy. Labor requests funding for the states earmarked for program integrity purposes. For example, in 2001, Labor allocated about $35 million for states to improve benefit overpayment detection and collection, eligibility reviews, and field tax audits. Labor also plans to continue its program of offering competitive grants to improve program integrity. For example, Labor awarded the state of Maryland a competitive grant to develop a technical assistance guide on methods for detecting overpayments. Similarly, Labor awarded California a grant in 1998 to develop a guide on best practices for recovering overpayments. In both cases, these guides were made available to all states to help them improve the integrity of their UI programs by identifying sources of information and methods that some states have found to be effective.
To facilitate improved payment accuracy in the states’ UI programs, Labor recently included an indicator in its Annual Performance Plan for FY 2003 that will establish a baseline measurement for benefit payment accuracy during 2002. Labor also plans to provide states with additional quality assurance data on the nature and cause of overpayments to help them better target areas of vulnerability and identify more effective means of preventing overpayments. At the time of our review, Labor was also developing a legislative proposal to give state employment security agencies access to the NDNH to verify UI claimants’ employment and benefit status in other states. Our analysis suggests that use of this data source could potentially help states reduce their exposure to overpayments. For example, if the directory had been used by all states to detect claimants’ unreported or underreported income, it could have helped prevent or detect hundreds of millions of dollars in overpayments in 2001 alone. In addition, Labor is working to develop an agreement with the Social Security Administration that would grant states access to the SSA’s SOLQ system. States that used this system would be able to more quickly validate the accuracy of each claimant’s SSN and identity at the time of application for UI benefits. Despite the various efforts by Labor and some states to improve the integrity of the UI program, problems still exist. The vulnerabilities that we have identified are partly attributable to a management approach in Labor and many states that does not adequately balance the need to quickly process and pay UI claims with the need to control program payments. While we recognize the importance of paying UI benefits to eligible claimants in a timely manner, this approach has likely contributed to the consistently high level of overpayments over time, and as such, may have increased the burden placed on some state UI trust funds. As the number of UI claimants has risen over the last year, many states have felt pressured to quickly process and pay additional claims. The results of our review suggest that, in this environment, the potential for errors and overpayments is likely. Labor is taking some positive steps to improve UI program integrity by helping enhance existing state operations. However, absent a change in the current approach to managing the UI program at both the federal and state level, it is unlikely that the deficiencies we identified will be addressed. In particular, without more active involvement from Labor in emphasizing the need to balance payment timeliness with payment accuracy, states may be reluctant to implement the needed changes in their management philosophy and operations. States are also unlikely to voluntarily increase their overpayment recovery efforts. As discussed in this report, Labor already possesses some management and operational tools to facilitate changes in the program. For example, with an increased emphasis on payment accuracy, Labor’s system of performance measures could help encourage states to place a higher priority on program integrity activities. However, an effective strategy to help states control benefit payments will require use of its quality assurance data to identify areas for improvement and work with the states to implement changes to policies and procedures that allow overpayments to occur. 
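To make the kind of crossmatch described above more concrete, the sketch below flags active UI claims whose claimants appear in new-hire records filed after the claim began, which is broadly how a directory of new hires can surface possible unreported earnings. It is only an illustration: the field names, record layouts, and data values are hypothetical and are not drawn from Labor's, SSA's, or any state's actual systems.

```python
# Hypothetical illustration of a new-hire crossmatch: flag active UI claims
# whose claimants appear in new-hire records dated after the claim began.
# Field names and data are invented for illustration only.
from datetime import date

ui_claims = [
    {"ssn": "123-45-6789", "claimant": "A. Doe", "claim_start": date(2002, 1, 7)},
    {"ssn": "987-65-4321", "claimant": "B. Roe", "claim_start": date(2002, 2, 4)},
]

new_hires = [
    {"ssn": "123-45-6789", "employer": "Acme Corp", "hire_date": date(2002, 2, 18)},
]

def crossmatch(claims, hires):
    """Return (claim, hire) pairs where the claimant's SSN shows up in a
    new-hire record dated on or after the claim start date -- a signal of
    possibly unreported earnings that would need employer verification."""
    hires_by_ssn = {}
    for hire in hires:
        hires_by_ssn.setdefault(hire["ssn"], []).append(hire)
    flagged = []
    for claim in claims:
        for hire in hires_by_ssn.get(claim["ssn"], []):
            if hire["hire_date"] >= claim["claim_start"]:
                flagged.append((claim, hire))
    return flagged

for claim, hire in crossmatch(ui_claims, new_hires):
    print(f'Review claim for {claim["claimant"]}: new hire reported by {hire["employer"]} on {hire["hire_date"]}')
```

In practice, a match of this kind would only prompt follow-up verification with the employer; it would not by itself establish an overpayment.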
Labor could also play a more active role in helping states obtain additional automated tools to verify factors affecting claimants' UI eligibility, such as identity, employment status, and income, as well as ensuring that these tools are actually used. Key to this is sustaining its efforts to expand state access to SSA's online database for verifying the accuracy of SSNs and developing more efficient automated means to help states verify claimants' employment status and any income they may be receiving in other states. Also, Labor already possesses systems such as WRIS that, with some modification, could potentially help states verify claimants' eligibility information in other states more efficiently. While implementing changes to existing systems would likely entail some additional administrative costs for Labor and the states, the results of this review and our prior work in other programs suggest that the savings that result from enhanced payment accuracy procedures (such as online access to important data sources) and increased attention to preventing and detecting overpayments could outweigh these costs. Finally, Labor must be willing to link state performance in the area of program integrity to tangible incentives and disincentives, such as through the annual administrative funding process. As currently designed and administered, the UI program remains vulnerable to overpayments and fraud. This vulnerability extends to the billions of dollars in additional federal funds recently distributed to the states by Congress. Thus, a coordinated effort between Labor and the states is needed to address the weaknesses we have identified and reduce the program's exposure to improper payments. Without such an effort, Labor risks continuing the policies and procedures that have contributed to consistently high levels of UI overpayments over the last decade. To facilitate a change in Labor's management approach that will help to improve UI program integrity, we recommend that the Secretary of Labor develop a management strategy to ensure that the UI program's traditional emphasis on quickly processing and paying UI claims is balanced with the need for payment accuracy. Such a strategy should include the following actions: Revise program performance measures to ensure increased emphasis on payment accuracy. Use the annual administrative funding process or other funding mechanisms to develop incentives and sanctions that will encourage state compliance with payment accuracy performance measures. Use its quality assurance data more intensively to help states identify internal policies and procedures that need to be changed to enhance payment accuracy. Develop a plan to help states increase the proportion of UI overpayments that are recovered each year. Study the potential for using the WRIS as an interstate eligibility verification tool. Labor generally agreed with our findings and our recommendations. In particular, Labor agreed that existing performance measures emphasize payment timeliness more heavily than payment accuracy, and noted that it is currently in the process of reviewing these measures. Labor also stated that our report does not sufficiently acknowledge the challenges that are inherent in assuring payment accuracy and the current and planned efforts by Labor and the states to address program integrity. We believe that this report fairly characterizes the challenges that states face in balancing the need to make timely payments with the need for payment accuracy.
In particular, the report acknowledges the fact that some types of overpayments are more difficult for states to detect and prevent than others, and therefore present additional challenges for states in ensuring payment accuracy. We also list several initiatives that Labor and the states are planning or currently implementing to enhance payment accuracy in the UI program. In addition, Labor provided a number of technical comments on our report, which we have incorporated where appropriate. Furthermore, Labor raised one issue in its comments that we believe requires additional explanation. Labor questioned our assessment that it has not fully utilized its quality assurance data to improve state operations. Labor noted that it was responsible for the development of the wage/benefit crossmatch system in the 1970s, and more recently has promoted the states' use of their state directory of new hires. While these initiatives demonstrate areas where Labor has played a more active role in facilitating the use of better verification tools, Labor's response does not directly address our finding that it is not systematically using its quality assurance data to identify and correct vulnerabilities in states' systems. As our report notes, the overpayment rate estimated by the quality assurance system has not significantly improved over the last 10 years. Thus, we continue to believe that Labor and some of the states are not adequately using the quality assurance data to address program policies and procedures that allow overpayments to occur. The entire text of Labor's comments appears in appendix II. We are sending copies of this report to the Secretary of Labor, the Assistant Secretary of Employment and Training, and other interested parties. Copies will be made available to others upon request. This report is also available at no charge on GAO's homepage at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or Daniel Bertoni at (202) 512-5988. Other major contributors are listed in appendix III. Appendix I: Categories of Overpayments Estimated by Labor's Quality Assurance System (U.S. Totals for 2001). In addition to those named above, Richard Burkard, Cheryn Powell, Frank Putallaz, Daniel Schwimer, John Smale, and Salvatore Sorbello made key contributions to this report.
The Unemployment Insurance (UI) program is a federal-state partnership to help replace the lost earnings of unemployed persons and to stabilize the economy during a recession. The Department of Labor estimates that $2.4 billion in overpayments were made in 2001, including $577 million attributed to fraud or abuse. Overpayments in the UI program result from management and operational practices at the state and federal level. At the state level, many states do not sufficiently balance the need to quickly process and pay UI claims with the need to control program payments. Moreover, states rely heavily on self-reported information from claimants for other important data, such as a claimant's receipt of other federal or state program benefits and whether they are citizens of the United States. At the federal level, policies and directives from the Department of Labor affect states' priorities and procedures in a manner that makes overpayments more likely.
Through special use permits, the Forest Service authorizes a variety of rights-of-way across the lands it administers. These include commercial uses such as pipelines and power lines and noncommercial uses such as driveways, roads, and trails. In total, there are about 13,000 permits for all rights-of-way. This report focuses on three commercial uses—oil and gas pipelines, power lines, and communications lines. In 1995, there were about 5,600 permits for these uses, which generated about $2.2 million in fees to the government. According to federal law, 25 percent of the fees generated from these permits is returned to the states where they were generated. The remaining 75 percent goes to the U.S. Treasury. The Forest Service administers about 191.6 million acres of land—roughly the size of California, Oregon, and Washington combined. The networks of oil and gas pipelines, power lines, and communications lines that cross the nation frequently go through national forest lands. Where these lands are located near population centers, the demand for land is higher, which thereby increases the value of a right-of-way. In order to best serve their customers, businesses that operate oil and gas pipelines, power lines, and communications lines frequently need to gain access to many miles of land in strips usually 20 to 50 feet wide. These companies negotiate with numerous landowners—both public and private—to gain rights-of-way across their lands. The Federal Land Policy and Management Act (FLPMA) of 1976 and the Mineral Leasing Act (MLA) generally require federal agencies to obtain fair market value for the use of federal lands for rights-of-way. In addition, title V of the Independent Offices Appropriation Act of 1952, as amended in 1982, requires the federal government to levy fair fees for the use of its services or things of value. Under the Office of Management and Budget’s (OMB) Circular A-25, which implements the act, the agencies are normally to establish user fees on the basis of market prices. While there are exceptions to this practice, they are generally reserved for federal, state, and local government agencies and nonprofit organizations. The Forest Service’s current fees for commercial rights-of-way for oil and gas pipelines, power lines, and communications lines frequently do not reflect fair market value. Before 1986, the Forest Service used a variety of techniques to establish fees for rights-of-way. These fees were based on appraisals, negotiations, a small percentage of the permittees’ investment in the land, or a small percentage of the estimated value of the land. However, in 1986 the Forest Service implemented a fee schedule to address the problems that the agency was having in administering the fees for rights-of-way. Agency officials told us that the 1986 fee schedule reflected land values representing the low end of the market. As a result, when the fee schedule was implemented, the fees for rights-of-way near some urban areas were significantly reduced from pre-1986 levels. Before 1986, the Forest Service did not have a consistent system to establish fees for oil and gas pipelines, power lines, or communications lines. The agency’s field staff used different methods for developing the fees for rights-of-way. Some used a percentage of the estimated value of the land or a percentage of the permittees’ investment in the land, while others used appraisals and negotiations with the permittees to set the fees. 
However, in addition to being inconsistent, these practices resulted in unpredictable fees and appraisals that were subject to an appeals process. At that time, agency officials thought that moving to a fee schedule based on fair market value would resolve these problems. To develop a fee schedule based on fair market value, Forest Service officials, as well as officials from the Department of the Interior’s Bureau of Land Management (BLM), collected market data on raw land values throughout the country. On the basis of these data, the Forest Service and BLM produced a fee schedule in 1986 which charged annual per acre fees that were based on the location and type of the right-of-way. The rates in the fee schedule were indexed to the Implicit Price Deflator to account for future inflation. However, according to Forest Service officials, the agency’s management and the industry viewed the rates as being too high. As a result, the fees in the 1986 schedule were reduced by 20 percent for oil and gas pipelines and 30 percent for power lines and communications lines. Before the reductions, the fees represented average raw land values for federal lands. These values did not consider several factors that are critical to establishing land values that reflect fair market value. Specifically, they did not reflect what the land was being used for, the “highest and best” use of the land, or the values of any urban uses. For example, if these factors are not considered, land located near a large metropolitan area, which might otherwise be used for a residential housing development, would be valued as if it were being used for livestock grazing—a use that would result in a considerably lesser value. As such, according to Forest Service officials, the data used to generate the land values used in the fee system represented the “bottom of the market” and did not reflect fair market value. Nonetheless, the fee schedule established in 1986 is the basis for current fees. The Forest Service officials in the agency’s Lands Division, which is responsible for the rights-of-way program at a national level, estimated that many of the current fees for rights-of-way may be only about 10 percent of the fair market value—particularly for lands near large urban areas. However, agency officials acknowledged that this estimate is based on their professional judgment and program experience and that there are no national data to support it. Because the fee schedule did not reflect several critical factors for determining fair market value, the fees for many rights-of-way, especially in forests near urban areas, were reduced when the fee schedule was implemented in 1986. For example, in the San Bernardino National Forest near Los Angeles, the annual fee for a fiber-optic cable was $465.40 per acre before the fee schedule was implemented and $11.16 afterwards. In the same forest, the annual fee for a power line was $72.51 per acre before the fee schedule and $8.97 afterwards. While these examples are among the most notable, the fees at forests that were not near urban areas frequently were also reduced. For example, in the Lolo National Forest in Montana, the fees for a communications line right-of-way went from $19.88 per acre to $17.23 per acre. Overall, at four of the six national forests where we collected detailed information, we found examples of fees that were reduced when the agency moved to a fee schedule in 1986. The Forest Service and BLM use the same fee schedule for rights-of-way. 
In March 1995, the Department of the Interior’s Inspector General issued a report which found that BLM’s fee system did not collect fair market value for rights-of-way. In the report, the Inspector General estimated that BLM could be losing as much as $49 million (net present value ) during the terms of the current rights-of-way by charging less than fair market value. At the time of the report, the agency had authorized 30,600 rights-of-way subject to rental payments. To determine how the Forest Service’s fees compare with those charged by nonfederal landowners, we collected and analyzed information on charges for rights-of-way by states and private landowners. We found that state and private landowners frequently charge higher fees than the Forest Service. However, because our analysis is based on a judgmental sample of forests, it is important to note that our findings may not be representative of the situation for the nation as a whole. To compare the Forest Service’s fees with those charged by nonfederal landowners, we collected available data on fees charged by nonfederal landowners in the same states as the forests that we visited. These forests included the San Bernardino National Forest and Angeles National Forest in California, the Arapaho/Roosevelt National Forest in Colorado, the Lolo National Forest in Montana, the Washington/Jefferson National Forest in Virginia, and the Mount Baker/Snoqualmie National Forest in Washington. Our objective was to include forests from different parts of the country, some of which are near urban areas and some of which are in rural areas. Since most nonfederal landowners charge a one-time fee either in perpetuity or for an extended term, such as 30 years, we used a net present value analysis to convert the Forest Service’s annual fees to an equivalent one-time fee, which could then be compared with the one-time fee charged by nonfederal landowners. Table 1 compares the Forest Service’s fees at the six forests we sampled with those charged by nonfederal landowners in the general vicinity of that forest. As table 1 shows, the Forest Service’s fees are frequently less than fees charged by nonfederal landowners for similar rights-of-way. This was the case in 16 of the 17 examples we found during our review. In over half (10) of the examples, the Forest Service’s fees were over $500 per acre less than the fees charged by nonfederal landowners. For example, in 1993 a power company negotiated with a private landowner in Virginia to obtain a right-of-way to run a power line. The power company agreed to pay a one-time fee of $42,280 for 30.2 acres of land, or $1,400 per acre. The Forest Service’s annual fee in 1993 for that part of Virginia was $22.01 per acre. Our use of net present value techniques showed that the right-of-way operator’s annual payment to the Forest Service of $22.01 per acre was equivalent to a one-time payment of $546 per acre. Thus, the Forest Service’s one-time fee was $854 per acre less than the fee charged by the private landowner. Another example from the table shows that in 1995, a natural gas pipeline in California paid a one-time fee of $130,726 per acre for a right-of-way on state land. As the table shows, the Forest Service’s comparable fee is over $129,000 less than the state of California’s fee. While this difference is atypical of other examples we found, it nonetheless demonstrates how a unique parcel of land can have a considerable value. 
Furthermore, it is an example of how difficult it is to design a fee schedule that can reflect the fair market value of all lands managed by the Forest Service. In addition to collecting comparable data on fees in the same states as the six national forests we visited, we also gathered examples of the rates paid to state and private landowners by the Bonneville Power Administration (BPA)—an electric utility operating in the northwestern United States. BPA runs power lines across hundreds of miles of land owned by the federal government, states, and private entities. We included BPA in our review because during the course of our work, we learned that this utility had extensive data on the rates it was paying for rights-of-way. Therefore, it was a good source of data on fees. The data in table 2 are based on a sample from a database of fees that BPA paid to state and private landowners. The table compares the rates BPA paid to state and private owners with the rates charged by the Forest Service in that area. As table 2 shows, in 12 out of 14 examples, the fees charged by nonfederal landowners were higher than those charged by the Forest Service and in most cases were significantly higher—$100 or more per acre. In 6 of the 14 examples, the fees charged by nonfederal landowners were over $1,000 per acre higher than the fees charged by the Forest Service in the area. For example, in 1990 BPA negotiated with a private landowner in Montana to gain a right-of-way for a power line. BPA and the landowner agreed to a one-time payment of $11,106 for 5.03 acres of land, or about $2,208 per acre. In comparison, in 1990 the Forest Service’s fee schedule produced an annual fee of $14.88 per acre for land located in the same county as the private land. Our use of net present value techniques showed that the annual payment received by the Forest Service of $14.88 per acre was equivalent to a one-time payment of $369 per acre. Thus, the Forest Service’s one-time fee was $1,839 per acre less than the fee charged by the private landowner. In order to meet the requirements of FLPMA, MLA, and OMB Circular A-25, the Forest Service needs to revise and update its current fee system to establish fees that more closely reflect fair market value. The way to accomplish this task is to develop a system that is based on data that reflect current land values. However, each of the several available options for developing such a system has costs and benefits that need to be considered. Many of the industry representatives we spoke with acknowledged that nonfederal landowners generally charge higher fees than the Forest Service. Furthermore, these representatives indicated that they would be willing to pay higher market-based fees if the Forest Service improves its administration of the program by using more market-like business practices. Both the industry representatives and Forest Service officials suggested several changes that, if implemented, could improve the efficiency of the program for both the Forest Service and the industry. The Forest Service has several options available to revise its fee system for rights-of-way to reflect fair market value. 
Among them are three basic options: (1) develop a new fee schedule based on recent appraisals and local market data; (2) develop a new fee schedule, as noted above, but allow agency staff the alternative of obtaining site-specific appraisals when the fee schedule results in fees that do not adequately reflect the fair market value of a right-of-way; or (3) eliminate the fee schedule and establish fees for each individual right-of-way based on a site-specific appraisal or local market data. The first option involves developing a new fee schedule based on recent appraisals and local market data. This option would include performing some site-specific appraisals of Forest Service rights-of-way and developing an inventory of the rates charged by nonfederal landowners for various types of rights-of-way in the area. These data would be used to formulate a new, more up-to-date fee schedule that would set annual fees for identified areas within a forest. The fee schedule would be used in the same way that the current schedule is used. In this way, the Forest Service could, for the most part, charge annual fees that broadly reflect the fair market value of a right-of-way for an area. The advantage of having a fee schedule, and one of the reasons the agency originally decided to use a fee schedule, is that it is both easy to use and generates fees that are consistent and predictable for the industry. The disadvantage of a fee schedule is that it does not take into account the unique characteristics that may affect the value of a particular parcel of land. Therefore, instances may arise when a fee schedule will charge fees that are significantly different from fair market value—as our analysis has shown. Furthermore, performing appraisals and collecting market data to develop a new fee schedule will cost the agency time and money. However, these additional costs may be offset by the additional revenue that would be generated from the increased fees. Another disadvantage of using a fee schedule is that it carries the administrative burden and cost of having to bill and collect fees every year. A second option available to the Forest Service is a variation of the first option. It too would involve developing a new fee schedule based on recent appraisals and market data. However, under this approach, the fees in the schedule would be used as minimum fees. When it appears that the fees from this schedule do not properly value a right-of-way, the agency would be permitted to obtain an individual site appraisal to determine the fair market value of the site. The fee would then be based on the appraisal instead of the fee in the schedule. This option would offer the ease of use provided by a fee schedule combined with an accounting of the unique characteristics of individual parcels of land as provided for in appraisals. If the agency decided to use this option in developing a new fee system, it would have to develop meaningful criteria for when field staff should seek an appraisal. Otherwise, agency field staff may not seek to obtain appraisals when they are justified. For example, the Forest Service’s current fee schedule contains a provision that permits Forest Service field staff to obtain appraisals. However, basing a fee on an appraisal can only occur when fair market value is 10 times greater than the fee from the fee schedule. 
This “10-times” rule is viewed by Forest Service officials in headquarters and in the field as being too high and, as a result, serves as a disincentive to obtaining appraisals. In fact, Forest Service headquarters and field staff could recall only one occasion in the past 10 years when this 10-times rule was used. A third option available to the Forest Service is to eliminate the fee schedule and establish fees for each individual right-of-way based on a site-specific appraisal or local market data. Appraisals are a technique commonly used in the marketplace for determining fair market value. By performing site-specific appraisals, the Forest Service could charge fees reflective of the fair market value for each individual permit. The fees could also be based on local market data. This method would be the most appropriate when agency staff are familiar with the fees being charged for nonfederal lands or when recent appraisal data are available from nearby lands. The obvious advantage of obtaining site-specific appraisals is that the practice would result in fees that would accurately reflect the fair market value for each individual permit throughout the Forest Service. As such, it would meet the requirements of FLPMA, MLA, and OMB Circular A-25. Like the other options, the downside of using appraisals is that they could be costly and/or time-consuming and could likely be subject to appeals because of their inherent subjectivity. In addition, this approach could be more difficult to administer than a fee schedule because of the need to perform appraisals on thousands of right-of-way permits across the nation. However, to mitigate this burden, the agency could require the users of rights-of-way to pay for any needed appraisals—something the industry representatives we spoke to agreed with. Industry officials we talked to representing a large segment of the users of rights-of-way indicated that, from their perspective, the value of rights-of-way on Forest Service lands is generally less than the value of similar nonfederal lands because of the administrative problems the prospective permittees may encounter in obtaining Forest Service permits. However, most of the industry representatives we spoke with told us that if the Forest Service improves its administration of the rights-of-way program by using more market-like administrative practices, they would be willing to pay fair market value for rights-of-way on Forest Service lands. While revising its fee system, the Forest Service can do several things to improve the administration of permits for rights-of-way. These include (1) using a more market-like instrument, such as an easement instead of a permit, to authorize rights-of-way; (2) billing less frequently or one time over the term of an authorization instead of annually; (3) providing consolidated billing for operators that have more than one right-of-way permit in a forest or region; and (4) making more timely decisions when processing new authorizations. These improvements would both reduce the agency’s cost of administering rights-of-way and bring about the use of industry practices commonly found in the market. The Forest Service has the authority to make most of these changes. However, MLA requires annual payments for rights-of-way for oil and gas pipelines. Thus, changing fee collection from an annual payment to a one-time payment would require legislative action from the Congress. 
Instead of employing special use permits to grant right-of-way authorizations, one improvement the Forest Service could make is to grant authorizations using an instrument, such as an easement, that is more commonly found in the market. Special use permits convey rights that are similar to those of easements but not equal to them. Special use permits are revocable. In other words, during the term of a permit, if the agency decides that a right-of-way is no longer consistent with management’s goals for an area of a forest, the agency can revoke the permit and require the operator to remove his investment in the land and leave. Because of this situation, banks do not recognize a permit as granting a value in the land equivalent to that granted by an easement, which is not revocable but can be terminated if the operator breaches the terms and conditions of the easement. The constraint on special use permits affects the users of rights-of-way when they are trying to obtain financing for a project. With a permit, the permittee is also at risk if the Forest Service decides to trade or exchange the land that the right-of-way crosses. In such instances, the permittee must renegotiate a right-of-way with the new landowner. If the Forest Service is going to revise its fee system to reflect fair market value, then the agency also needs a comparable instrument that conveys rights similar to those commonly found in the marketplace. This comparability could best be achieved by issuing easements instead of permits. Permits have been viewed by agency officials as giving the Forest Service more flexibility because it can terminate them if the use is no longer consistent with management’s objectives in a forest. In practice, agency officials indicated that rarely has this flexibility been used to revoke a permit. Another improvement available to the agency in administering rights-of-way is to revise its billing system to eliminate the annual billing of permit fees. Instead, the agency could bill only once for the 20- or 30-year term of an authorization, or perhaps reduce billing to every 5 or 10 years. The agency has the authority to make this change for power lines and communications lines, but it would need to seek authority to do so for oil and gas pipelines. In addition, the agency can consolidate billing for operators that have multiple permits within the same forest or region. One-time billing and consolidated billing would reduce costs to both the agency and the permittee. For example, the Forest Service estimates that it costs the agency an average of about $40 to mail a bill and collect payment for a permit. Over the life of a 30-year permit, the agency’s costs would be $1,200. With 5,600 rights-of-way permits for oil and gas pipelines, power lines, and communications lines, the potential savings for the program could be substantial—roughly $6.7 million ($1,200 x 5,600 permits) over a 30-year term. (The potential savings of $6.7 million has a net present value of about $3.9 million.) If the agency moved to a one-time payment, it would substantially reduce the costs of processing bills in the future. These costs can be further reduced by consolidating billing for multiple permits issued to the same operator within a forest or region. While the agency has made progress in consolidating some bills into “master permits,” industry officials indicated that there remain more opportunities for consolidation. 
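The billing-cost arithmetic above can be checked with a short present-value calculation. The sketch below assumes the $40 annual cost is avoided at the beginning of each year over a 30-year term and uses the 4.2 percent discount rate described in the methodology section of this report; those timing assumptions are ours, made only to show that the figures of about $6.7 million in nominal savings and roughly $3.9 million in net present value are internally consistent.

```python
# Rough check of the billing-cost savings figures, assuming the $40 annual
# billing cost is avoided at the start of each year and discounting at the
# 4.2 percent rate used in the report's methodology (an assumption about
# payment timing, not a statement of how GAO performed its calculation).
ANNUAL_BILLING_COST = 40   # estimated cost to mail a bill and collect payment
PERMITS = 5_600            # permits for pipelines, power lines, and communications lines
TERM_YEARS = 30
DISCOUNT_RATE = 0.042

nominal_savings = ANNUAL_BILLING_COST * TERM_YEARS * PERMITS

# Present value of an annuity paid at the start of each year for TERM_YEARS.
annuity_due_factor = (1 - (1 + DISCOUNT_RATE) ** -TERM_YEARS) / DISCOUNT_RATE * (1 + DISCOUNT_RATE)
present_value = ANNUAL_BILLING_COST * PERMITS * annuity_due_factor

print(f"Nominal savings over 30 years: ${nominal_savings:,.0f}")  # about $6.7 million
print(f"Net present value of savings:  ${present_value:,.0f}")    # roughly $3.9 million
```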
Both one-time billing and consolidated billing are commonly found in the marketplace, and both are supported by industry representatives. Furthermore, moving to a one-time billing process has significant cost-savings implications if and when the Forest Service attempts to increase its fees to reflect fair market value. Specifically, if the Forest Service decides to move to site-specific appraisals to establish fees, as described in the third option, the agency would have to do thousands of appraisals to determine the fees for the current permits. As we noted, under current conditions, this additional workload could be both costly and time-consuming. However, if the agency moved to a one-time billing process and based its fees on site-specific appraisals, then the agency would need to perform an appraisal on each permit only once over a 20- to 30-year authorization period. While the agency would spend more of its resources on appraisals, agency officials indicated that the cost savings of moving to one-time billing would more than cover the additional appraisal costs. Furthermore, the agency can largely negate these costs by requiring the users to pay for any needed appraisals. The industry representatives that we spoke to had no problem with paying for the necessary appraisals as long as the agency also moved to easements and one-time billing. Another improvement to the agency’s administration of rights-of-way is to reduce the time the agency takes to reach a decision on whether to approve a new right-of-way. Industry representatives indicated that it frequently takes months and occasionally years for the Forest Service to reach a decision on whether to approve an application for a new right-of-way permit. Generally, delays in approving applications are the result of a lack of agency staff to perform environmental studies and inconsistent requirements among Forest Service units. Forest Service headquarters officials acknowledged that applications for permits are not processed in a timely manner, and they are now trying to identify opportunities for streamlining the agency’s practices to help address this issue. It is their view that the industry should assume a greater share of the costs of both processing applications for new rights-of-way and administering existing rights-of-way. Industry representatives we spoke with indicated a willingness to pay for application and administration costs. Both agency and industry representatives have been working together to implement and resolve this issue. The Forest Service needs to update its current fees to fair market value for rights-of-way used by operators of oil and gas pipelines, power lines, and communications lines. In most cases, nonfederal landowners charge higher fees for similar rights-of-way. In attempting to arrive at fees based on fair market value, the agency has several options. Each of these options has a number of advantages and disadvantages. The initial costs of developing a new fee system could be substantial because of the need to perform appraisals and collect the market data needed to establish fair market value. These costs could be mitigated, and in some cases negated, with some administrative improvements to the program. Given the tight budgets and resource constraints that all federal land management agencies are experiencing, one option appears to be the most advantageous—obtaining site-specific appraisals that are paid for by the users of rights-of-way. 
However, to implement this option, a number of other changes would have to be made to the program to make it more market-like and more efficient to administer. To meet the requirements of FLPMA, MLA, and OMB Circular A-25, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to develop a fee system that ensures that fair market value is obtained from companies that have rights-of-way to operate oil and gas pipelines, power lines, and communications lines across Forest Service lands. While there are a number of options available to accomplish this goal, the option of establishing fees based on local market data or site-specific appraisals paid for by the users of rights-of-way appears to be the most attractive because it collects fair market value for each right-of-way and also reduces the agency’s administrative costs. We also recommend that the Secretary improve the administration of the program by (1) authorizing rights-of-way with a more market-like instrument—specifically, easements; (2) billing once during the term of an authorization or, at a minimum, reducing the frequency of the billing cycle; and (3) consolidating the billing of multiple permits issued to the same operator in a forest or region. To the extent that the agency needs additional authority to charge one-time fees, we recommend that the Secretary seek that authority from the Congress. In addition, we also recommend that the Forest Service continue its efforts to streamline its practices for processing applications for right-of-way authorizations. We provided a draft of this report to the Forest Service and the Western Utility Group—an industry group representing a large number of users of rights-of-way—for their review and comment. We met with officials from the Forest Service—including the Acting Director of the Division of Lands—and with officials from the Western Utility Group, including its Chairman. Both the agency and the Western Utility Group agreed with the factual content, conclusions, and recommendations in the report. While the Forest Service officials agreed with the report’s recommendations, they noted that the recommendations should also include having the Forest Service (1) look for ways to operate more efficiently and (2) manage the rights-of-way program in a more business-like manner. We are not including these points because we believe they are already inherent in our recommendations. The Forest Service officials also stated that the industry should assume a greater share of the costs of both processing applications for new rights-of-way and administering existing rights-of-way. We have revised the report to reflect this comment. Officials from the Western Utility Group provided us with some clarifications on technical issues, which have been included in the report as appropriate. They also noted that while they currently pay nonfederal landowners higher fees for rights-of-way, it is their view that they get more from these landowners than they do from the Forest Service because nonfederal landowners (1) generally use easements, instead of permits, to authorize rights-of-way and (2) are more timely than the Forest Service in responding to requests for rights-of-way. We conducted our review from April 1995 through March 1996 in accordance with generally accepted government auditing standards. We performed our work at Forest Service headquarters and field offices. 
We also contacted nonfederal landowners and representatives of companies that operate oil and gas pipelines, power lines, and communications lines on federal lands. Appendix II contains further details on our objectives, scope, and methodology. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Agriculture, the Chief of the U.S. Forest Service, and the Director of the Office of Management and Budget. We will also make copies available to others on request. Should you have questions about this report or need more information, please call me at (202) 512-3841. Major contributors to this report are listed in appendix III. We were asked by the Chairman, Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, to determine (1) whether the fees currently charged to users of Forest Service rights-of-way that operate oil and gas pipelines, power lines, and communications lines reflect fair market value, (2) how the Forest Service’s fees compare with fees charged by nonfederal landowners, and (3) what, if any, changes are needed to the Forest Service’s fee system to ensure that fees reflect fair market value. Our review included rights-of-way managed by the U.S. Department of Agriculture’s Forest Service. Our work addressed the major commercial users of rights-of-way: oil and gas pipelines, power lines, and communications lines. To determine how the Forest Service establishes fees for rights-of-way, we reviewed the laws and implementing regulations governing rights-of-way. Because the Forest Service and the Bureau of Land Management (BLM) worked together to develop the joint 1986 fee schedule for rights-of-way, we reviewed the methods these agencies used to develop the schedule. However, we did not verify the accuracy of the data or the computations used by the agencies in developing this fee schedule. To determine whether the current federal fees reflect fair market value, we reviewed applicable laws and regulations, along with the Department of Agriculture’s requirements for obtaining fair market value on lands it administers. We interviewed representatives of nonfederal entities (states, counties, private companies, and private landowners) to obtain information on commonly accepted techniques for determining fair market value. We also interviewed officials at Forest Service headquarters and field locations. We reviewed rights-of-way in six national forests: the Angeles National Forest and the San Bernardino National Forest in California, the Arapaho/Roosevelt National Forest in Colorado, the Lolo National Forest in Montana, the Washington/Jefferson National Forest in Virginia, and the Mount Baker/Snoqualmie National Forest in Washington. We selected these sites to obtain broad geographical representation and to encompass a high volume of commercial rights-of-way. To determine how federal fees compare with fees charged on nonfederal land, we compared the fee determination methods used by the Forest Service and BLM to those used by states, counties, private companies, and private landowners. For example, we interviewed state and county officials responsible for rights-of-way agreements in California, Colorado, Montana, Virginia, and Washington. We also interviewed commercial land managers who manage private lands in Montana and Virginia. 
Furthermore, we reviewed the Bonneville Power Administration's (BPA) settlement records for rights-of-way in Montana and Washington states. In addition, state and county officials, private land managers, and BPA administrators told us what they charged and/or were charged for various types of rights-of-way agreements. Using net present value techniques, we compared these fees with those charged by the federal government. In order to compute the net present value of future payments to the Forest Service, we deflated future payments by 4.2 percent per year. We obtained this number by subtracting expected inflation from the 30-year government bond rate. As of March 21, 1996, the 30-year government bond rate was 6.65 percent, and the WEFA Group's forecast for inflation was 2.45 percent. (The WEFA Group is a commonly cited, private economic forecasting organization that produces estimates of the long-term economic outlook, including expected inflation.) To obtain views on potential changes to the Forest Service's fee schedule, we met with officials of the Western Utility Group. This organization represents over 25 major companies that operate oil and gas pipelines, power lines, and communications lines. These companies represent about 75 percent of the energy and communications business in 11 western states. About 74 percent of all the land in the Forest Service is within these 11 western states. (For a list of member organizations of the Western Utility Group, see app. I.) In addition, we interviewed private landowners and Forest Service personnel in each of the states we visited. Finally, we interviewed several BLM field staff to obtain their viewpoints on the fee schedule. Joseph D. Kile
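The conversion of the Forest Service's annual per-acre fees into equivalent one-time fees, used in the comparisons earlier in this report, can be approximated with a simple perpetuity formula. The sketch below assumes the annual fee is paid at the start of each year in perpetuity and discounts at the 4.2 percent rate derived above; under those assumptions it reproduces the $546 (Virginia) and $369 (Montana) per-acre equivalents cited earlier, although the report does not spell out the exact formula GAO used.

```python
# A minimal sketch of converting an annual per-acre fee into an equivalent
# one-time payment, assuming the fee is paid at the beginning of each year
# in perpetuity and discounting at the 4.2 percent rate described above.
# These assumptions reproduce the per-acre figures cited in the report, but
# the report does not state the precise formula used.
DISCOUNT_RATE = 0.042

def one_time_equivalent(annual_fee_per_acre: float, rate: float = DISCOUNT_RATE) -> float:
    """Present value of a perpetual stream of annual payments made at the
    start of each year: A + A / r = A * (1 + r) / r."""
    return annual_fee_per_acre * (1 + rate) / rate

print(round(one_time_equivalent(22.01)))  # Virginia power line example -> about 546 per acre
print(round(one_time_equivalent(14.88)))  # Montana (BPA) example -> about 369 per acre
```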
Pursuant to a congressional request, GAO reviewed the Forest Service's issuance of rights-of-way on national forest lands, focusing on: (1) whether the fees collected for rights-of-way reflect fair market value; (2) how Forest Service fees compare with fees charged by private landowners; and (3) the changes needed to ensure that Forest Service fees reflect fair market value. GAO found that: (1) Forest Service fees for rights-of-way for oil and gas pipelines, power lines, and communications lines are typically below fair market value; (2) Forest Service fees for rights-of-way are generally less than those charged by nonfederal landowners; (3) options available to the Forest Service for revising its fee determination system include using a new fee schedule based on recent appraisals and local market data, using a new fee schedule with the flexibility to disregard it when its fees are below fair market value, and using site-specific appraisals only; and (4) many rights-of-way users would be willing to pay fair market value for Forest Service rights-of-way if the Forest Service would improve the administration of its rights-of-way program.
UASs can be categorized by both size and mission, as shown in figure 1. For the purposes of this report, in terms of size, we use the broad categories of "small" and "large" UASs. Small UASs typically weigh less than 55 pounds and can be used for a variety of commercial purposes, including photography and package delivery. According to an industry association, small UASs are expected to comprise the majority of UASs that will operate in the national airspace system. Large UASs, depending on their size and purpose, generally fly at higher altitudes and are used for the purposes of surveillance, data gathering, and communications relay. UAS operations are also categorized by how they are being used—their mission—either within line of sight of the operator or beyond the line of sight of the operator. For UAS operations within the operator's line of sight—for example, a real estate agent taking photographs of a house—the operator relies only on their vision to avoid colliding with other objects. On the other hand, UAS operations occurring beyond the line of sight of the operator—for example, conducting rail or pipeline inspections—require that FAA segregate the airspace or that the UAS have instruments to sense other aircraft and obstacles and avoid them, as well as other technologies that will keep the aircraft operating safely during its mission. The FAA plays two major roles in integrating UASs into the national airspace—regulator and service provider. As the regulator, the FAA seeks to ensure the safety of persons and property in the air and on the ground, in part by requiring that UAS operators and manufacturers follow specific operation and manufacturing standards. As the service provider, the FAA is responsible for providing safe and efficient air-traffic control services in the national airspace system and other portions of global airspace. In addition to FAA, many federal and private sector entities have roles in the effort to integrate UAS into the national airspace system. See table 1 for UAS stakeholders and their responsibilities. FAA also partners with a range of industry, federal research entities, universities, and international organizations for research and development on UAS issues. Federally Funded Research and Development Centers and Cooperative Research and Development Agreements typically require an agency, organization, or company to perform specific research and provide FAA with data in exchange for funding. FAA uses these types of agreements to support research and development in critical technologies needed for UAS integration, including "detect and avoid" and "command and control," as well as for the dedicated radio-frequency spectrum and "human factors." Some other partnerships use Other Transaction Agreements to establish the research and development relationship. These agreements take many forms and are subject to requirements that may differ from the federal laws and regulations that apply to contracts, grants, or cooperative agreements, and therefore might not include a requirement to share research results with FAA. Currently, most UAS operations must remain within visual line of sight of the UAS operator. FAA's long-term goal is to pursue research and development that will advance technology in these critical areas, such as detect and avoid, and support beyond-visual-line-of-sight operations. These types of operations, according to an industry group, have the most potential for commercial purposes.
In response to the 2012 Act, FAA has been planning for UAS integration into the national airspace and has been taking steps toward increasing UAS operations. The 2012 Act outlined 17 date-specific requirements and set deadlines for FAA to achieve safe UAS integration by September 2015. These requirements included developing two planning documents: the UAS Comprehensive Plan and the UAS Roadmap. FAA has completed these two requirements in addition to naming six test sites where research and development will occur. However, we found in December 2014 that several other requirements, including key ones such as the publication of a final rule on small UAS operations, had not been completed (see app. II). As part of its role in supporting UAS integration, FAA authorizes all UAS operations (access to the airspace as well as the aircraft itself) in the national airspace system—military; public (academic institutions and federal, state, and local governments, including law enforcement organizations); and civil (non-government, including commercial). Depending on the type of user, public or civil, the process for accessing the airspace may be different (see table 2). Currently, since a final rulemaking is not yet completed, FAA approves UAS access to the national airspace only on a case-by-case basis. FAA provides this approval through three different means: Public or Civil Certificates of Waiver or Authorization (COA): A COA is an authorization, generally for up to 2 years, issued by the FAA to an operator for a specific UAS activity. Public entities, including FAA-designated test sites (described in more detail later), and civil entities may apply for a COA to obtain authorized access to the airspace. FAA has a goal to review and approve all COAs within 60 days of receipt. Section 333 exemptions: Since September 2014, commercial entities have applied to FAA for exemptions under section 333 of the 2012 Act, Special Rules for Certain Unmanned Aircraft Systems. This section requires the Secretary of Transportation to determine if certain UASs may operate safely in the national airspace system prior to the completion of UAS rulemakings. Special Airworthiness Certificates in the Experimental Category and the Restricted Category (Experimental Certificate): Civil entities, including commercial interests, may apply for experimental certificates, which may be used for research and development, commercial operations, training, or demonstrations by manufacturers. While FAA has proceeded in planning for integration, foreign countries are also experiencing an increase in UAS use and planning for integration, and some have begun to allow commercial entities to fly UASs under limited circumstances. Some countries have already established regulations for flying UASs or formal processes for exemptions, while others have taken steps to completely ban all UAS operations. While some countries have worked independently to integrate UAS operations, some international groups, such as ICAO, are working to harmonize UAS regulations and standards across borders. FAA has taken a number of steps to move toward further safe integration of UAS in the national airspace in response to key requirements of the 2012 Act. FAA has developed the following planning documents: In November 2013, FAA issued the UAS Comprehensive Plan identifying six high-level strategic goals for integrating UAS into the national airspace, including routine public operations and routine civil operations.
The Comprehensive Plan provides a phased-in approach for achieving these goals, which will initially focus on public UAS operations, but ultimately will provide a framework for civil UAS operations. According to the plan, each partner agency will work to achieve these national goals and may develop agency-specific plans that are aligned to the national goals and objectives. DOD's Unmanned Systems Integrated Roadmap and FAA's UAS Integration Roadmap, described below, are examples of such plans. In November 2013, FAA also issued the UAS Integration Roadmap, which identified a broad three-phase approach to UAS integration—accommodation, integration, and evolution—with associated priorities for each phase. These priorities provide insight into FAA's near-, mid-, and far-term goals for UAS integration, as shown in figure 2. FAA intends to use this approach to facilitate further incremental steps toward its goal of seamlessly integrating UAS flight into the national airspace. Under this approach, FAA's initial focus during the accommodation phase will be on safely allowing for the expanded operation of UASs by selectively accommodating some use and demonstrating progress by increasing operations throughout the phase. In the integration phase, FAA plans to develop new operational rules, standards, and guidance, shifting its emphasis toward moving beyond case-by-case approval for UAS use, once technology can support safe operations. Finally, in the evolution phase, FAA plans to focus on revising its regulations, policy, and standards based on the changing needs of the airspace. This phased approach has been supported by stakeholders across both academia and industry. The 2012 Act requires the Roadmap to be updated annually, but as of May 2015 FAA had issued only one version of the Roadmap. FAA intends to update the Roadmap by September 2015 and send it to the Office of Management and Budget for additional review before it is publicly released. While these planning documents provide a broad framework for integration, FAA is still in the process of developing an implementation plan for integrating UASs. FAA's Comprehensive Plan and Roadmap provide broad plans for integration, but are not detailed implementation plans that predict with any certainty when full integration will occur and what resources will be needed. The Department of Transportation's Inspector General issued a report in June 2014 that recommended FAA develop an implementation plan. Two reports—one from the UAS Aviation Rulemaking Committee and a second internal FAA report—have discussed the importance of an implementation plan and the information to include as part of such a plan. The UAS Aviation Rulemaking Committee has emphasized that FAA needs an implementation plan that would identify the means, necessary resources, and schedule to safely and expeditiously integrate civil UAS into the national airspace. The proposed implementation plan contains several hundred tasks and other activities needed to complete the UAS integration process, including the following: identifying gaps in current UAS technologies, regulations, standards, policies, or procedures; developing new technologies, regulations, standards, policies, and procedures that would support safe UAS operations; and identifying necessary activities to advance routine UAS operations in the national airspace, including the development of guidance materials, training, and procedures for certification of aircraft.
An internal FAA report from August 2014 prepared by MITRE Corporation (MITRE) was intended to assist FAA's development of the key components of an implementation plan. The report indicated that, among other actions, FAA's implementation plan should: identify the tasks necessary, responsibilities, resources, and expected time frames for incremental expansion of UAS operations; clarify the priorities for aligning internal resources in support of near-term and long-term integration efforts and provide consistent communication with external stakeholders on the expected progress, cost, and extent of UAS integration during these time periods; align resources supporting UAS integration, including allocation of FAA personnel and funds used for contracts and to acquire systems and services in support of integration; and establish the operational, performance, and safety data needed, as well as the associated infrastructure for collecting, storing, disseminating, and analyzing data, actions that could be components of an implementation plan. According to FAA, it continues to work with MITRE to develop the foundation for a detailed implementation plan. FAA expects MITRE to complete this work by September 2015 and expects to enact the plan by December 2015. According to FAA, the agency used the Aviation Rulemaking Committee's report when writing the Roadmap and is applying the report prepared by MITRE to help develop the detailed implementation plan. In February 2015, FAA made progress toward the 2012 Act's requirement to issue a final rule for the operations of small UASs—those weighing less than 55 pounds—by issuing a Notice of Proposed Rulemaking (NPRM) that could, once finalized, allow greater access to the national airspace. To mitigate risk, the proposed rule would limit small UASs to daylight-only operations, confined areas of operation, and visual-line-of-sight operations. The proposed rule also addressed other issues pertinent to UAS operations, including aircraft registration, operations in the national airspace, and operator certification. See table 3 for a summary of the rule's major provisions. FAA's release of this proposed rule for small UAS operations started the process of addressing remaining requirements of the 2012 Act. FAA's proposed rule also sought comments on a potential micro-UAS classification (4.4 pounds or less) that would apply to very small UASs being used for authorized purposes. This classification would be based on the UAS Aviation Rulemaking Committee's recommendation, as well as approaches adopted in other countries, that a separate set of regulations for micro-UASs be created. FAA is considering provisions for the micro-UAS classification such as limiting airspeed to 30 knots, limiting flight to within visual line of sight, and requiring that aircraft be made out of materials that break on impact. The proposed rule would not apply to model aircraft—unmanned aircraft that are flown for hobby or recreational purposes, capable of sustained flight in the atmosphere, and flown within visual line of sight of the person operating the aircraft—as specified in the 2012 Act. Whether a UAS is considered a model aircraft or a small UAS depends on how it is operated. For example, if the operator is flying an unmanned aircraft for recreational purposes, the unmanned aircraft is considered a model aircraft. If the exact same type of unmanned aircraft is being operated for an authorized purpose, such as a search and rescue mission, it is considered a small UAS.
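The operation-based distinction just described, together with the micro-UAS criteria on which FAA sought comment, amounts to a simple classification rule. The following sketch is illustrative only: it uses the thresholds named above (4.4 pounds, 30 knots, 55 pounds), but the function, its inputs, and the separate micro-UAS label are simplifying assumptions; under the proposal, a micro-UAS would still be a small UAS.

```python
# Illustrative sketch only: an operation-based classification rule reflecting the
# distinctions discussed above. The thresholds (4.4 pounds, 30 knots, 55 pounds)
# come from the text; the function, its inputs, and the separate "micro-UAS" label
# are simplifying assumptions.

def classify_unmanned_aircraft(weight_lbs: float, purpose: str,
                               airspeed_knots: float = 0.0,
                               within_visual_line_of_sight: bool = True,
                               breaks_on_impact: bool = False) -> str:
    """Classify an unmanned aircraft based on how it is operated and built."""
    if purpose == "hobby or recreation":
        # The same airframe flown recreationally is treated as a model aircraft.
        return "model aircraft"
    if (weight_lbs <= 4.4 and airspeed_knots <= 30
            and within_visual_line_of_sight and breaks_on_impact):
        return "micro-UAS (potential classification under the proposed rule)"
    if weight_lbs < 55:
        return "small UAS"
    return "UAS over 55 pounds (outside the scope of the proposed small UAS rule)"

# The same aircraft, flown for recreation versus for search and rescue:
print(classify_unmanned_aircraft(3.0, "hobby or recreation"))   # model aircraft
print(classify_unmanned_aircraft(3.0, "search and rescue"))     # small UAS
```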
The 2012 Act specifically prohibits FAA from promulgating rules regarding model aircraft that meet specific criteria, including model aircraft flown strictly for hobby or recreational use and operated in a manner that does not interfere with and gives way to any manned aircraft. However, the proposed rule would incorporate the 2012 Act provisions that preserve FAA's authority to pursue enforcement against persons operating model aircraft who endanger the safety of the national airspace system. According to FAA, it may take 16 months to process the comments it receives on the NPRM and develop and issue the final rule for small UAS operations. If FAA takes 16 months, the final rule would be issued in late 2016 or early 2017, about two years beyond the requirement in the 2012 Act. However, during the course of our work, FAA told us that the time needed to respond to a large number of comments could further extend the time to issue a final rule. When the comment period closed on April 24, 2015, FAA had received over 4,500 comments. FAA officials told us that the agency has taken a number of steps to develop a framework to efficiently process the comments. Specifically, they said that FAA has a team of employees assigned to lead the effort, with contractor support, to track and categorize the comments as soon as they are received. FAA has also met the requirement from the 2012 Act to create UAS test sites for research and development. Specifically, in December 2013 FAA selected six UAS test site locations, which all became operational between April 2014 and August 2014. According to FAA, these sites were chosen based on a number of factors, including geography, climate, airspace use, and a proposed research portfolio that was part of the application. Under FAA policy, all UAS operations at a test site must be authorized by FAA through either the use of a COA or an experimental certificate. In addition, FAA does not provide funding to support the test sites. Thus, these sites rely upon revenue generated from entities, such as those in the UAS industry, that use the sites for UAS flights. The 2012 Act authorized the test sites to operate until February 14, 2017. FAA stated it is too early to assess the test sites' results and effectiveness and thus to determine whether the test sites should be extended. According to FAA officials, FAA does not object to extending the test sites but may need additional resources if that happens. Although it still relies on case-by-case approvals, FAA has increased UAS operations during the accommodation phase of UAS integration. As we have previously noted, UAS operators can only gain access to the national airspace by obtaining a COA, an experimental certificate, or a section 333 exemption. From 2010 to 2014, the total number of COAs approved for public operations has increased each year, with FAA issuing 403 COAs thus far this year, as shown in table 4. Similarly, from 2011 to 2014, the total number of experimental certificates has increased each year, with FAA issuing six thus far this year. In September 2014, FAA granted the first section 333 exemptions; at that time, six exemptions were granted for commercial UAS operations to movie and TV production companies. As of June 9, 2015, FAA had granted 548 section 333 exemptions to companies for a variety of additional commercial operations supporting the real estate, utility, and agriculture industries, among others.
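As noted above, operators currently gain access to the national airspace only through a COA, an experimental certificate, or a section 333 exemption. The sketch below models that choice in simplified form; the operator categories, purposes, and mapping logic are illustrative assumptions drawn from the descriptions in this section, not FAA guidance.

```python
# Illustrative sketch only: a simplified mapping from operator type and purpose to
# the three access mechanisms described above (COA, section 333 exemption,
# experimental certificate). The category strings and the mapping are assumptions;
# in practice civil entities may also apply for COAs, and every approval remains
# case by case.

def authorization_pathway(operator_type: str, purpose: str) -> str:
    """Return the access mechanism a UAS operator would typically pursue."""
    if operator_type == "public":
        # Public entities (government agencies, FAA-designated test sites) use COAs.
        return "Certificate of Waiver or Authorization (COA)"
    if operator_type == "civil":
        if purpose == "commercial":
            # Commercial operators have sought section 333 exemptions since
            # September 2014.
            return "Section 333 exemption"
        if purpose in ("research and development", "training", "demonstration"):
            return "Special airworthiness certificate (experimental category)"
    return "No standing pathway; coordinate with FAA case by case"

print(authorization_pathway("civil", "commercial"))        # Section 333 exemption
print(authorization_pathway("public", "law enforcement"))  # COA
```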
See figure 3 for examples of commercial uses, including some approved under section 333 exemptions. FAA has taken steps to make access easier for those operating UASs under a section 333 exemption. On March 23, 2015, FAA established an interim policy to speed up authorizations for certain commercial unmanned aircraft operators that request section 333 exemptions. According to FAA, the new policy helps bridge the gap between the past "case-by-case" approval process, which evaluated every commercial UAS operation individually, and future operations after FAA publishes a final version of the proposed small UAS rule. Under the new policy, FAA will grant a COA for flights at or below 200 feet to any commercial UAS operator with a section 333 exemption for aircraft that weigh less than 55 pounds, operate during the daytime, operate within visual line of sight of the pilots, and stay certain distances away from airports or heliports. According to FAA, the "blanket" 200-foot COA allows flights anywhere in the country except restricted airspace and other areas, such as major cities, where FAA prohibits commercial UAS operations. FAA expects the new policy will allow companies and individuals who want to use UASs within these limitations to start flying much more quickly than before. A company wanting to operate above 200 feet, or outside the other rules set up by FAA, must obtain a separate COA. FAA took additional steps in May 2015 to work with industry to safely expand UAS operations. FAA announced its Pathfinder Program, which will partner FAA with companies to perform research in support of UAS integration. These companies will focus on using UAS for specific applications, such as news gathering and surveying crops. In addition, two of the companies will focus on applications beyond the visual line of sight of the operator. One industry stakeholder stated the next step would be to develop additional mechanisms to allow UAS operations beyond the visual line of sight of the operator once technology supports greater use. While accommodating UAS access, FAA and industry have taken steps to educate UAS operators on how to operate safely. UAS industry stakeholders and FAA have begun an educational campaign that provides prospective users with information and guidance on flying safely and responsibly. Specifically, they launched an informational website for UAS operators to ease public concerns about privacy and support safer UAS operations in the national airspace. FAA also announced plans to develop the "B4UFLY" smartphone application, designed to help UAS users, both model aircraft and recreational UAS operators, know where it is safe and legal to fly. The application is designed to let an operator know if it is safe and legal to fly in a specific location. FAA has worked with federal and industry stakeholders to coordinate federal activities in support of conducting research and development and creating UAS standards to facilitate UAS integration. As with other large government-wide initiatives, achieving results for the nation increasingly requires that federal agencies and others work together. FAA has worked with the UAS Executive Committee to facilitate federal UAS activities and with RTCA Special Committee 228, ASTM International Committee F38, and the UAS Aviation Rulemaking Committee to develop safety, reliability, and performance standards for UASs. Each collaborative group has defined different long-term goals in support of UAS integration and has made progress toward achieving these goals.
The Executive Committee's long-term goals involve working to solve the broad range of technical, procedural, and policy issues affecting UAS integration into the national airspace. In support of this objective, the Executive Committee agencies, other public agencies, and industry have also developed processes and procedures to safely demonstrate small UAS operations in remote areas of the Arctic, including beyond-line-of-sight operations. The UAS demonstration occurred in domestic and international airspace on and off the coast of Alaska. RTCA Special Committee 228 has set out its own goals across two phases. Currently working toward completion of the first phase, RTCA is developing minimum operational performance standards for detect-and-avoid and command-and-control technologies for UASs. RTCA has made progress toward this goal with help from the Executive Committee. Specifically, the Executive Committee's Science and Research Panel developed a definition of "well clear" to help inform RTCA Special Committee 228's work. The UAS Aviation Rulemaking Committee has a goal to develop a report for FAA on its efforts to provide direction for UAS operational criteria, among other tasks, by April 18, 2016. ASTM International Committee F38's long-term goal involves developing and publishing voluntary consensus standards for small and large UASs as FAA requests them. ASTM International Committee F38 has developed standards and recommendations to support FAA's small UAS rulemaking that cover elements such as systems design, construction, and testing. FAA has applied other interagency collaborative methods in support of UAS integration, including memorandums of understanding or agreement (MOU) and conferences. FAA entered into MOUs with DOD and NASA to expedite the COA process and ensure the availability of DOD's data. According to FAA officials, the MOUs eased collaboration with DOD and NASA because they established roles and responsibilities for each agency as well as procedures for DOD to obtain COAs. In addition, FAA convenes meetings with test site officials and attends conferences where UAS issues are discussed. For example, FAA regularly holds conference calls and convenes technical interchange meetings with test site officials to address test site issues. According to FAA, the technical interchange meetings are opportunities for FAA to provide updates to the test sites and discuss common areas of research interest. The manager of FAA's UAS Integration Office has presented information about FAA's UAS efforts during industry conferences, such as the Association for Unmanned Vehicle Systems International's annual meeting. These conferences allow FAA to provide guidance and updates directly to the industry and public. Since being named in December 2013, the six designated test sites have become operational, applying for and receiving authorization from FAA to conduct test flights. Specifically, from April through August 2014, each of the six test sites became operational and signed an Other Transaction Agreement with FAA, establishing their research and development relationship. All flights at a test site must be authorized under the authority of a COA or an experimental certificate approved by FAA. Since becoming operational, five of the six test sites have received 48 COAs and one experimental certificate in support of UAS operations, resulting in over 195 UAS flights across the five test sites. These flights provide the operations and safety data to FAA required by the COAs.
While there are only a few contracts with industry thus far, according to test site operators, these contracts will be important if the test sites are to generate sufficient revenue to remain in operation. Table 5 provides an overview of test-site activity since the sites became operational. According to all test sites, FAA approval for access to the airspace can be a lengthy process, taking 90 days or even longer. FAA and the test sites have found ways to allow quicker access to the test site airspace and relieve some administrative burden from FAA. In February 2015, FAA awarded the Northern Plains Test Site in North Dakota four broad-area COAs that were aircraft-specific. According to a test site official, these COAs allowed designated aircraft to fly over nearly the entire state of North Dakota and will make it easier to accommodate industry for research. Furthermore, these COAs were a positive step in allowing quicker access to the airspace at test sites. Reducing FAA's role in the process creates more certainty regarding how long it will take an operator to access airspace at the test sites. Specifically, the test site representative indicated that FAA's role was reduced because aircraft could be added to these existing COAs through a process simpler than applying for individual COAs. In May 2015, FAA approved a "blanket" COA allowing the test sites to conduct UAS operations at or below 200 feet anywhere in the national airspace, similar to the authority provided with section 333 exemptions. These COAs cover small UAS operations conducted during the day and within line of sight of the operator, and the operations cannot occur in restricted airspace or in areas close to airports. According to FAA, this will help improve UAS access, allowing more operations in support of research that can further the UAS integration process. Previously, all UASs needed their own COA when operating at a test site, but this action by FAA will allow any small UAS to operate at the test sites within the COA's requirements. The use of designated airworthiness representatives by the test sites to review and approve experimental certificates may be quicker for industry and relieve some of FAA's workload. Industry benefits from not having to lease its aircraft to the test site, since all test sites are operated by public entities (academic institutions or federal, state, or local governments) and thus all aircraft must be public aircraft unless they are operating under a special certificate. In addition, any industry group working with the test site would not have to go through FAA to receive the experimental certificate. The Nevada test site has affiliated itself with a designated airworthiness representative, who has approved an aircraft to operate under an experimental certificate for the Nevada test site. According to FAA, the use of a designated airworthiness representative allows it to better leverage its resources. FAA officials and some test site officials told us that progress has been made in part because of FAA's and the test sites' efforts to work together. Test site officials meet every 2 weeks with FAA officials to discuss current issues, challenges, and progress. According to meeting minutes, these meetings have been used to discuss many issues, from training for designated airworthiness representatives to processing of COAs. In addition, the six test sites have developed operational and safety processes that have been reviewed by FAA.
Thus, while FAA has no funding directed to the test sites to specifically support research and development activities, FAA dedicates time and resources to supporting the test sites, and FAA staff we spoke to believe the test sites are a benefit to the integration process and worth this investment. Despite the progress made since they began operating, according to test site operators, the test sites faced a number of challenges in their first year of operations:

Guidance on research: According to FAA, because the test sites receive no federal funding, FAA can neither direct specific research to be conducted nor direct the test sites to share specific research data, other than the operations and safety data required by the COA. The Other Transaction Agreement for each test site defines the purpose of the test site as a place to conduct research and testing under FAA safety oversight to support UAS integration into the national airspace. The Other Transaction Agreement indicates the test sites will provide FAA with UAS research and operational data to support the development of procedures, standards, and regulations. However, FAA officials told us that the Antideficiency Act may prevent the agency from directing specific test site activities without providing compensation. In October 2014, FAA provided a list of potential research areas to the test sites to guide the research that each test site may conduct. According to FAA, this document was not to be construed as a directive but more as guidance for possible research areas. However, three test sites told us this document was too broad to be considered guidance for the research the test sites should conduct.

Access to airspace: The blanket COA covers only public aircraft, meaning that civil operators still have to lease the aircraft to the test site for operations. But, as one test site representative stated, a broad-area COA allowing civil operations at the test site would be even more beneficial. Another test site representative indicated that the site will continue to work with FAA to make access easier, allowing flights at higher altitudes with different aircraft.

Maintaining operations: While all the test sites had some level of initial funding, from either private industry or state legislatures, to become operational, they must attract the UAS industry to the test sites to generate enough revenue to maintain operations. However, test site operators reported that test sites have additional requirements compared with operating outside the test sites, including leasing the aircraft to the test site to operate under the public COA. While the test sites have signed 22 contracts, there is a chance that some test sites will not survive due to the financial burden.

Companies Conducting Beyond-Visual-Line-of-Sight Testing Overseas: Insitu, Inc., a Boeing subsidiary, conducted beyond-visual-line-of-sight testing in Denmark in May 2015 with a ScanEagle UAS. The flights took place in cooperation with the Danish Transport Authority as part of an agreement signed by Boeing and the airport to develop a UAS Test Center in Denmark, which is used for training, testing, and development. The activity included members of the public and private sector, including the UAS Denmark Consortium, a group of companies, government organizations, and other entities supporting UAS industry development. The testing demonstrated capabilities for a variety of industries, including agriculture and aerial surveying, emergency and natural-disaster response, and defense and Arctic surveillance.
Some companies have made a decision to go to other countries to conduct UAS testing because they believe it takes less time to be approved for test flights. For example, Amazon has reported it has testing under way in multiple countries outside the United States, including a site in Canada. In an effort to attract some industry operators, the Pan Pacific Test Site has a location in Iceland where, according to the Director, review and approval for test flights can happen much faster, in as few as 10 days, relative to the more than 90 days a COA may take in the United States. In addition, the UAS industry is conducting tests in this country outside the test sites. For example, CNN has worked with the Georgia Institute of Technology. FAA has used cooperative research and development agreements, federally funded research and development centers, and grants to conduct other UAS research and development. These agreements for research are similar to the Other Transaction Agreement in that they direct the purpose and goals of the relationship between FAA and the research entities. However, unlike the Other Transaction Agreement in place for the test sites, according to FAA, many of these agreements have language specifically addressing the sharing of research and data. The following are examples of other resources FAA has devoted to UAS integration research and development: Cooperative research and development agreement: New Mexico State University has had a flight test center operating for several years under a cooperative research and development agreement with FAA. The center serves a similar purpose to the designated test sites but has been operating since 2007. The flight test center has conducted research in many areas, including nighttime flying and, more recently, long-endurance UAS flights operating between 10,000 and 17,000 feet. According to an official, New Mexico State University's flight test center has challenges with getting access to the airspace for customers because the process to receive a COA can be lengthy. In addition, this official told us the flight test center would like authority to approve COAs to operate at the test center because FAA is backlogged and therefore approvals are delayed. Finally, according to the flight test center operators, FAA can get data from the research being conducted at the test site but does not direct them as to what to provide. While the flight test center has operated under a cooperative research and development agreement since 2007, in May 2015 the flight test center switched to an Other Transaction Agreement to continue UAS testing. Federally funded research and development center: MITRE manages federally funded research and development centers for multiple federal agencies, including FAA and DOD. MITRE has ongoing work supporting FAA's UAS integration effort, including work on UAS standards and rulemaking and on research planning and progress, among other efforts. MITRE brings together the federal agencies—FAA, NASA, DOD, DHS, and others—to advance UAS integration. According to MITRE officials, one of the biggest challenges in this role is integrating all the UAS-related work across the federal government, academia, and the private sector. Grants: In August 2014, FAA awarded two grants to Georgia Tech Research Corporation and the University of North Dakota to conduct literature reviews of UAS issues. Georgia Tech is collecting information on research being conducted on the effect of UAS collisions on other airborne and ground-based objects.
The University of North Dakota is examining UAS safety criteria, particularly whether UASs could be deadly. According to FAA, both studies will support ongoing UAS research and help determine the applicability of past studies. Center of Excellence: In May 2015, FAA selected a team led by Mississippi State University as the Center of Excellence for UAS. According to FAA, the goal of the Center of Excellence will be to create a cost-sharing relationship among academia, industry, and government that will focus on the primary research areas needed to support UAS integration. FAA hopes the center could provide both short- and long-term research through testing and analysis. To support this purpose, the Center of Excellence has an annual budget of $500,000 for the next 10 years. FAA also has additional resources to support UAS integration, including facilities that conduct research and development and staff who manage FAA's other UAS research and development efforts. FAA's William J. Hughes Technical Center houses staff in charge of supporting and managing FAA's designated test sites. While the test sites do not have specific funding, FAA has dedicated resources located at the Hughes Technical Center to support the setup and ongoing operations of the test sites. For example, COA data are collected and analyzed at the Hughes Technical Center. In addition, FAA has participated in the twice-a-year technical interchange meetings with the test sites. These meetings have brought together the test sites and FAA to address issues in the setup and operation of the test sites. Furthermore, FAA has staff supporting the test sites through review of test site operation and safety procedures and manuals to support the monthly reporting of the operational and safety data required by each COA. The William J. Hughes Technical Center is located in Atlantic City, New Jersey, and contains laboratories supporting aviation research, development, testing, and evaluation of air traffic control and aircraft safety, among other aviation areas. It also serves as the primary facility supporting the Next Generation Air Transportation System. According to numerous studies and stakeholders we interviewed, many countries around the world have been allowing commercial UAS operations in their airspace for differing purposes. We also identified a number of countries that allow commercial UAS operations and have done so for years. Specifically, Canada and Australia have had regulations pertaining to UASs in place since 1996 and 2002, respectively. According to a recent MITRE study, the types of commercial operations allowed vary by country and include aerial surveying, photography, and other lines of business. For example, Japan has allowed UAS operations in the agriculture industry since the 1980s to help apply fertilizer and pesticide. In Europe, EASA (the European Union authority for aviation safety, whose main activities include strategy and safety management, the certification of aviation products, and the oversight of approved organizations and EU member states) has proposed categories of UAS operations. One proposed category would require a risk assessment of the proposed operation and an approval to operate under restrictions specific to the operation. The final proposed category, certified operations, would be required for higher-risk operations, specifically when the risk rises to a level comparable to manned operations.
This category goes beyond FAA's proposed rules by proposing regulations for large UAS operations and operations beyond the pilot's visual line of sight. As other countries work toward integration, standards organizations from Europe and the United States are coordinating to try to ensure harmonized standards. Specifically, RTCA and the European Organization for Civil Aviation Equipment (EUROCAE) have joint committees focused on harmonization of UAS standards. We studied the UAS regulations of Australia, Canada, France, and the United Kingdom and found that these countries impose similar types of requirements and restrictions on commercial UAS operations. For example, all of these countries except Canada require government-issued certification documents before UASs can operate commercially. In addition, each country requires that UAS operators document how they ensure safety during flights, and their UAS regulations go into significant detail on subjects such as remote pilot training and licensing requirements. For example, the United Kingdom has established "national qualified entities" that conduct assessments of operators and make recommendations to the Civil Aviation Authority as to whether to approve that operator. Regulations in these countries continue to evolve. In November 2014, Canada issued new rules creating exemptions for commercial use of small UASs weighing 4.4 pounds or less and of those weighing from 4.4 pounds to 55 pounds. UASs in these categories can operate commercially without a government-issued certification but must still follow operational restrictions, such as a height restriction and a requirement to operate within line of sight. Transport Canada officials told us this arrangement allows them to use scarce resources to regulate situations of relatively high risk. Australia, in similar fashion, is considering relieving UASs lighter than 4.4 pounds from its requirement to obtain a UAS operator's certificate. France's regulation describes four weight-based categories of UAS as well as four operational scenarios of increasing complexity. The regulations then discuss which UAS categories can operate in each scenario. FAA, by electing to focus on UASs up to 55 pounds in its Small UAS NPRM, has taken a similar risk-based approach. The United States has not yet finalized regulations specifically addressing small UAS operations, but if UASs were to begin flying today in the national airspace system under the provisions of FAA's proposed rules, their operating restrictions would be generally similar to regulations in these other four countries. However, there would be some differences in the details. For example, FAA proposes altitude restrictions of below 500 feet, while Australia, Canada, and the United Kingdom restrict operations to similar but slightly lower altitudes. FAA's proposed rule also requires that UAS pilots be certified prior to commencing operations, while Canada and France do not require pilot certification in certain low-risk scenarios. While FAA continues to finalize the small UAS rule—a process that could take until late 2016 or early 2017—other countries continue to move ahead with UAS integration. Thus, if the final rule reflects the proposed rule, the operating restrictions in the United States may be well behind what exists in other countries when the rule is finalized. Table 6 shows how FAA's proposed rules compare with the regulations of Australia, Canada, France, and the United Kingdom.
While regulations in these countries generally require that UAS operations remain within the pilot's visual line of sight, some countries are moving toward allowing limited operations beyond the pilot's visual line of sight. For example, according to Australian civil aviation officials, they are developing a new UAS regulation that would allow operators to request a certificate allowing beyond-line-of-sight operations. However, use would be very limited and allowed only on a case-by-case basis. Similarly, according to a French civil aviation official, France approves, on a case-by-case basis, very limited beyond-line-of-sight operations. Finally, in the United States, there have been beyond-line-of-sight operations in the Arctic, and NASA, FAA, and the UAS industry have successfully demonstrated detect-and-avoid technology, which is necessary for beyond-line-of-sight operations. Like the United States, Australia, Canada, France, and the United Kingdom distinguish between recreational model aircraft and commercial UASs and have issued guidelines for safe operation. For example, the United Kingdom defines a model aircraft as any small unmanned aircraft weighing less than 44 pounds, or large unmanned aircraft weighing more than 44 pounds, that is used for sporting and recreational purposes. Australia makes no practicable distinction between a small UAS and a model aircraft except that of use—model aircraft are flown only for the sport of flying them. However, Australia also defines a giant model aircraft as one weighing between 55 pounds and 331 pounds. Approvals for commercial UAS operations have increased in these four countries, and some of these countries allow more commercial operations than the United States. Since 2011, the number of approvals for commercial operations in France has increased every year. According to a Civil Aviation Authority official, in 2014 there were about 3,600 commercial UAS operators in France. In Canada, according to a Transport Canada official, there were over 1,600 approvals for commercial and research-related UAS operations in 2014. As previously mentioned, certain commercial operations in Canada have not needed approval since November 2014, so there may be even more UAS operations. The United Kingdom's Civil Aviation Authority attributes the growth of UASs in that country to UASs becoming less expensive and simpler to operate. In the United Kingdom, as of February 2015, there were 483 commercial UAS operators, and this number has increased every year since 2010. Similar to the United Kingdom, Australia has seen an increase in commercial UAS operators since 2010, with over 200 commercial operators currently approved. Australia's Parliament attributes the growth to improvements in UASs' piloting and control technologies, as well as reductions in UAS prices. With FAA's approvals for commercial exemptions exceeding 500 as of June 9, 2015, the United States has closed the gap with some other countries' level of commercial use. Other countries trying to integrate UAS operations face challenges that are common across some countries, including the United States. Specifically, some of the challenges are: Technology shortfalls and unresolved spectrum issues. Technology needs and concerns about available spectrum constrain full integration of UASs into airspace with manned aircraft in the United States and in countries around the world.
UASs' current inability to detect and avoid other aircraft, the lack of a standard for command and control systems, and the lack of dedicated and secure frequency spectrum are technical challenges preventing full UAS integration into the national airspace. However, organizations around the world are looking to address these technology issues and develop standards to support safe UAS operations. At the worldwide level, the International Civil Aviation Organization is addressing how UAS integration would affect its existing standards. At the European level, the European UAS Roadmap contains a strategic research and development plan that describes anticipated deliverables along with key milestones, timelines, and resources needed. Separate from the international organizations, researchers in individual countries are also addressing these challenges. For example, in February 2014, Australian researchers achieved what was then believed to be a world-first breakthrough for small UASs by developing an onboard system that enabled a UAS to detect another aircraft using vision while in flight. Safe operations by recreational users. Countries around the world also face challenges in ensuring that UAS purchasers operate them safely. As UASs become more affordable and increasingly available, some individuals are conducting unsafe or illegal UAS operations. In July 2014, Australia's Parliament reported on testimony from several witnesses that UASs are being flown by operators who unknowingly break safety rules, thereby posing a safety risk to manned aircraft and persons on the ground. In response to unsafe operations, a few countries have placed outright bans on UAS operations. For example, in India, in response to the surge in interest for commercial and recreational use, the government placed an outright ban on any UAS use until its civil aviation agency issues regulations. Similar to the "Know Before You Fly" education campaign in the United States, other countries have sought to educate operators. For example, the United Kingdom has developed and distributed a brochure describing safe flying practices. In Australia, UAS purchasers receive a similar document when they purchase the product. Canada has launched a national safety awareness campaign for UASs, which aims to help Canadians better understand the risks and responsibilities of flying UASs. In addition, Transport Canada has set up a web page that provides guidelines for flying UASs safely and answers frequently asked questions. While countries face some UAS integration challenges that are similar to those in the United States, other challenges, such as airspace complexity and the ease of regulatory change, can make integration in this country more difficult. Airspace complexity is one aspect in which the United States differs from other countries. According to FAA, the U.S. airspace is the busiest and most complex in the world; after integration, UASs would share it with more than 300,000 general aviation aircraft, ranging from amateur-built aircraft, rotorcraft, and balloons to highly sophisticated turbojets (executive jets). Introducing potentially large numbers of UASs flown by hobbyists, farmers, law enforcement agencies, and others would add to this complexity. In contrast, according to a study by MITRE, other countries have fewer general aviation aircraft, a situation that may make integrating UASs easier. For example, the U.K. has about 20,000 registered general aviation aircraft, while Australia has around 8,400.
A study conducted by MITRE for FAA identified this factor as one that can affect the speed of change and adaptation in various aviation environments. We provided a draft of this report to the Department of Transportation (DOT) for review and comment. In comments, which were provided in an email, DOT stated that the report addresses many of the challenges of UAS integration but does not address any environmental concerns and that the report should state that it did not examine the environmental considerations of UAS integration. DOT further noted that FAA is conducting research to understand the environmental impacts of UAS integration, the role that UASs play in National Environmental Policy Act compliance, and the applicability of noise standards regulations to UASs. We did not examine the environmental considerations of UAS integration. The discussion of challenges in the report does not mention environmental concerns because it focuses on challenges the test sites faced during their first year of operation, as reported by the test site operators. We did clarify in our scope and methodology description that we did not cover environmental considerations of UAS integration. DOT also provided technical comments on the draft that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation and the appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report focuses on FAA's efforts to develop procedures to allow UAS use within the national airspace system. Specifically, we reviewed (1) the status of FAA's progress toward safe integration of UAS into the national airspace, (2) research and development support from FAA's test sites and other resources, and (3) how other countries have progressed toward UAS integration into their airspace for commercial purposes. To address the three objectives, we reviewed and synthesized a range of published reports from GAO and FAA that included general background information on a variety of related issues, such as FAA's framework for UAS integration, efforts to accommodate ongoing research and commercial UAS use, and UAS technology challenges. We reviewed other relevant background literature on related issues, including results from databases, such as ProQuest® and Nexis®, trade publications, literature from industry stakeholder groups, and information from the Internet. We also reviewed provisions of the FAA Modernization and Reform Act of 2012 and the Notice of Proposed Rulemaking for small UAS operations. In addition, we reviewed more detailed and specific documentation related to the different objectives, as described below.
To determine FAA's progress toward safe integration of UAS into the national airspace, we: Reviewed documents provided by officials and conducted semi-structured interviews with officials at federal agencies, including FAA's Unmanned Aircraft Systems Integration and Research and Development Offices, the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the Department of Homeland Security. We reviewed FAA's Comprehensive Plan and Roadmap for UAS integration. Interviewed representatives from FAA's Joint Planning and Development Office, the UAS Aviation Rulemaking Committee, RTCA, and MITRE Corporation, as well as the voluntary standards development organization ASTM International. We interviewed representatives from the Association for Unmanned Vehicle Systems International, Aircraft Owners and Pilots Association, American Institute of Aeronautics and Astronautics, and the Academy of Model Aeronautics. We also obtained information from FAA and http://www.regulations.gov on the section 333 exemptions FAA granted under the FAA Modernization and Reform Act of 2012 from 2014 to May 2015. Reviewed documents provided by and interviewed federal and industry representatives from the collaborative groups—the Executive Committee, RTCA Special Committee 228, the UAS Aviation Rulemaking Committee, and ASTM International Committee F38—and industry groups that are involved in FAA's efforts to integrate UAS into the national airspace system. To identify research and development support from FAA's test sites and other resources, we: Reviewed and analyzed documents from each of the six test sites where FAA has recently allowed UAS operations, including the applications submitted by the selected test sites and quarterly reports provided to FAA. Conducted semi-structured interviews with officials from the test sites, including the State of Nevada, the University of Alaska, the North Dakota Department of Commerce, Griffiss International Airport, the Virginia Polytechnic Institute & State University, and Texas A&M University Corpus Christi, to determine the issues encountered in becoming operational, conducting research, sharing the research results with FAA, and receiving support or guidance from FAA. Spoke with representatives from other universities with centers of research on UAS technology and issues, including New Mexico State University, Massachusetts Institute of Technology Lincoln Laboratory, the Humans and Autonomy Lab at Duke University, and the Georgia Institute of Technology, to obtain information about the resources FAA has dedicated to conducting other UAS research and development. To identify how the United States compares to other countries in the progress and development of UAS use for commercial purposes, we: Developed case studies for four countries that have made progress in integrating UASs into their national airspace—France (Direction générale de l'aviation civile); the United Kingdom (UK Civil Aviation Authority); Australia (Australia Civil Aviation Safety Authority); and Canada (Transport Canada Civil Aviation). We selected these countries based on several factors, including the status of regulatory requirements for commercial UASs, beyond-line-of-sight activities, and whether the country allows non-military UAS to operate in the airspace. We obtained the UAS regulations of each country and interviewed civil aviation authorities in each to obtain additional information about the issues encountered with UASs.
Interviewed other stakeholders familiar with the UAS activities currently occurring in other countries, including the International Civil Aviation Organization (ICAO), to determine the factors that influenced those countries' policies regarding UASs. We conducted this performance audit from January 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

FAA Modernization and Reform Act of 2012 requirements and status of FAA actions:

Enter into agreements with appropriate government agencies to simplify the process for issuing Certificates of Waiver or Authorization (COA) or waivers for public unmanned aerial systems (UAS). Status of action: In process—memorandum of agreement (MOA) with DOD signed Sept. 2013; MOA with Department of Justice signed Mar. 2013; MOA with NASA signed Mar. 2013; MOA with Department of Interior signed Jan. 2014; MOA with the Office of the Director, Operational Test and Evaluation (DOD) signed Mar. 2014; MOA with National Oceanic and Atmospheric Administration still in draft.

Expedite the issuance of COAs for public safety entities.

Establish a program to integrate UASs into the national airspace at six test ranges. This program is to terminate 5 years after the date of enactment.

Develop an Arctic UAS operation plan and initiate a process to work with relevant federal agencies and national and international communities to designate permanent areas in the Arctic where small unmanned aircraft may operate 24 hours per day for research and commercial purposes.

Determine whether certain UAS can fly safely in the national airspace before the completion of the Act's requirements for a comprehensive plan and rulemaking to safely accelerate the integration of civil UASs into the national airspace or the Act's requirement for issuance of guidance regarding the operation of public UASs, including operating a UAS with a COA or waiver.

Develop a comprehensive plan to safely accelerate integration of civil UASs into the national airspace.

Issue guidance regarding operation of civil UAS to expedite the COA process; provide a collaborative process with public agencies to allow an incremental expansion of access into the national airspace as technology matures and the necessary safety analysis and data become available, until standards are completed and technology issues are resolved; facilitate the capability of public entities to develop and use test ranges; and provide guidance on public entities' responsibility for operation.

Make operational at least one project at a test range.

Approve and make publicly available a 5-year road map for the introduction of civil UAS into the national airspace, to be updated annually.

Submit to Congress a copy of the comprehensive plan.

Publish in the Federal Register the Final Rule on small UAS.

Publish in the Federal Register a Notice of Proposed Rulemaking to implement recommendations of the comprehensive plan.

Publish in the Federal Register an update to the Administration's policy statement on UAS in Docket No. FAA-2006-25714.

Achieve safe integration of civil UAS into the national airspace.

Publish in the Federal Register a Final Rule to implement the recommendations of the comprehensive plan.

Develop and implement operational and certification requirements for public UAS in the national airspace.

In addition to the contact named above, the following individuals made important contributions to this report: Brandon Haller, Assistant Director; Geoffrey Hamilton, Daniel Hoy, Eric Hudson, Bonnie Pignatiello Leer, Ed Menoche, Josh Ormond, Amy Rosewarne, Andrew Stavisky, and Sarah Veale.
UASs are aircraft that do not carry a pilot aboard, but instead operate on pre-programmed routes or are manually controlled by following commands from pilot-operated ground control stations. Unauthorized UAS operations have, in some instances, compromised safety. The FAA Modernization and Reform Act of 2012 directed FAA to take actions to safely integrate UASs into the national airspace. In response, FAA developed a phased approach to facilitate integration and established test sites, among other things. GAO was asked to review FAA's progress in integrating UASs. This report addresses (1) the status of FAA's progress toward safe integration of UASs into the national airspace, (2) research and development support from FAA's test sites and other resources, and (3) how other countries have progressed toward UAS integration into their airspace for commercial purposes. GAO reviewed and analyzed FAA's integration-planning documents; interviewed officials from FAA and UAS industry stakeholders; and met with the civil aviation authorities from Australia, Canada, France, and the United Kingdom. These countries were selected based on several factors, including whether they have regulatory requirements for commercial UASs, whether they allow operations beyond the view of the pilot, and whether non-military UASs are allowed to operate in the airspace. In comments on this report, the Department of Transportation noted that GAO did not address environmental considerations of UAS integration; such considerations were outside the scope of this report. The Federal Aviation Administration (FAA) has progressed toward its goal of seamlessly integrating unmanned aerial system (UAS) flights into the national airspace. FAA has issued its UAS Comprehensive Plan and UAS Integration Roadmap, which provide broad plans for integration. However, these are not detailed implementation plans; according to FAA, it is working with MITRE to develop a foundation for an implementation plan, and FAA expects to enact a plan by December 2015. While FAA still approves all UAS operations on a case-by-case basis, in recent years it has increased approvals for UAS operations. Specifically, the total number of approvals for UAS operations has increased each year since 2010, and over the past year has included approvals for commercial UAS operations for the first time. In addition, FAA has issued a Notice of Proposed Rulemaking that proposes regulations for small UASs (less than 55 pounds). FAA's six designated test sites have become operational but have had to address various challenges during the process. The designated test sites became operational in 2014, and as of March 2015, over 195 test flights had taken place. These flights provide operations and safety data to FAA in support of UAS integration. In addition, FAA has provided all test sites with a Certificate of Waiver or Authorization allowing small UAS operations at or below 200 feet anywhere in the United States. However, during the first year of operations, the test sites faced some challenges. Specifically, the test sites sought additional guidance regarding the type of research they should conduct. According to FAA, it cannot direct the test sites, which receive no federal funding, to conduct specific research. However, FAA did provide the test sites with a list of potential research areas to offer some guidance on areas for research.
FAA has conducted other UAS research through agreements with MITRE and some universities, and in May 2015 named the location of the UAS Center of Excellence—a partnership among academia, industry, and government conducting additional UAS research. Unlike FAA's agreements with the test sites, many of these arrangements have language specifically addressing the sharing of research and data. Around the world, countries have been allowing UAS operations in their airspace for purposes such as agricultural applications and aerial surveying. Unlike in the United States, countries GAO examined—Australia, Canada, France, and the United Kingdom—have well-established UAS regulations. Also, Canada and France currently allow more commercial operations than the United States. While the United States has not finalized UAS regulations, the provisions of FAA's proposed rules are similar to those in the countries GAO examined. However, FAA may not issue a final rule for UASs until late 2016 or early 2017, and rules in some of these countries continue to evolve. Meanwhile, unlike under FAA's proposed rule, Canada has created exemptions for commercial use of small UASs in two categories that allow operations without a government-issued certification, and France and Australia are approving limited beyond line-of-sight operations. Similar to the United States, other countries are facing technology shortfalls, such as the ability to detect and avoid other aircraft and obstacles, as well as unresolved issues involving limited spectrum that limit the progress toward full integration of UASs into the airspace in these countries.
Since 1996, Congress has taken important steps to increase Medicare program integrity funding and oversight, including the establishment of the Medicare Integrity Program. Table 1 summarizes several key congressional actions. CMS has made progress in strengthening provider enrollment provisions but needs to do more to identify and prevent potentially fraudulent providers from participating in Medicare. Additional improvements to prepayment and postpayment claims review would help prevent and recover improper payments. Addressing payment vulnerabilities already identified could further help prevent or reduce fraud. PPACA authorized, and CMS has implemented, new provider enrollment procedures that address past weaknesses identified by GAO and HHS's Office of Inspector General (OIG) that allowed entities intent on committing fraud to enroll in Medicare. CMS has also implemented other measures intended to improve existing procedures. Specifically, to strengthen the existing screening activities conducted by CMS contractors, the agency added screenings of categories of provider enrollment applications by risk level, contracted with new national enrollment screening and site visit contractors, and began imposing moratoria on new enrollment of certain types of providers. Screening Provider Enrollment Applications by Risk Level: CMS and OIG issued a final rule in February 2011 to implement many of the new screening procedures required by PPACA. CMS designated three levels of risk—high, moderate, and limited—with different screening procedures for categories of Medicare providers at each level. Providers in the high-risk level are subject to the most rigorous screening. Based in part on our work and that of OIG, CMS designated newly enrolling home health agencies and suppliers of durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) as high risk, and designated other providers at lower risk levels. Providers at all risk levels are screened to verify that they meet specific requirements established by Medicare, such as having current licenses or accreditation and valid Social Security numbers. High- and moderate-risk providers are also subject to unannounced site visits. Further, depending on the risks presented, PPACA authorizes CMS to require fingerprint-based criminal history checks. Last month, CMS awarded a contract that will enable the agency to access Federal Bureau of Investigation information to help conduct those checks of high-risk providers and suppliers. PPACA also authorizes the posting of surety bonds for certain providers. CMS has indicated that the agency will continue to review the criteria for its screening levels and will publish changes if the agency decides to update the assignment of screening levels for categories of Medicare providers. Doing so could become important because the Department of Justice (DOJ) and HHS reported multiple convictions, judgments, settlements, or exclusions against types of providers not currently at the high-risk level, including community mental health centers and ambulance providers. CMS's implementation of accreditation for DMEPOS suppliers, and of a competitive bidding program, including in geographic areas thought to have high fraud rates, may be helping to reduce the risk of DMEPOS fraud. While continued vigilance of DMEPOS suppliers is warranted, other types of providers may become more problematic in the future.
Specifically, in September 2012, we found that a range of providers have been the subjects of fraud investigations. According to 2010 data from OIG and DOJ, over 10,000 providers that serve Medicare, Medicaid, and Children's Health Insurance Program beneficiaries were involved in fraud investigations, including not only home health agencies and DMEPOS suppliers, but also physicians, hospitals, and pharmacies. In addition, the provider type constituting the largest percentage of subjects in criminal health care fraud investigations was medical facilities—including medical centers, clinics, or practices—which constituted almost a quarter of subjects in such investigations. DMEPOS suppliers made up a little over 16 percent of subjects. National Enrollment Screening and Site Visit Contractors: CMS contracted with two new types of entities at the end of 2011 to assume centralized responsibility for two functions that had been the responsibility of multiple contractors. One of the new contractors is conducting automated screenings to check that existing and newly enrolling providers and suppliers have valid licensure, accreditation, and a National Provider Identifier (NPI), and are not on the OIG list of providers and suppliers excluded from participating in federal health care programs. The second contractor conducts site visits of providers to determine whether sites are legitimate and the providers meet certain Medicare standards. CMS has reported that, since implementation of the PPACA screening requirements, the agency had revoked over 17,000 suspect providers' ability to bill the Medicare program. Site visits for DMEPOS suppliers are to continue to be conducted by the contractor responsible for their enrollment. In addition, CMS at times exercises its authority to conduct a site visit or request its contractors to conduct a site visit for any Medicare provider or supplier. Moratoria on Enrollment of New Providers and Suppliers in Certain Areas: CMS suspended enrollment of new home health providers and ambulance suppliers in certain fraud "hot spots" and other geographic areas. In July 2013, CMS first exercised its authority granted by PPACA to establish temporary moratoria on enrolling new home health agencies in Chicago and Miami, and new ambulance suppliers in Houston. In January 2014, CMS extended its first moratoria and added enrollment moratoria for new home health agency providers in Fort Lauderdale, Detroit, Dallas, and Houston, and new ground ambulance suppliers in Philadelphia. These moratoria are scheduled to be in effect until July 2014, unless CMS extends or lifts them. CMS officials cited areas of potential fraud risk, such as a disproportionate number of providers and suppliers relative to beneficiaries and extremely high utilization, as rationales for suspending new enrollments of home health providers or ground ambulance suppliers in these areas. We are currently examining the ability of CMS's provider enrollment system to prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in Medicare. Specifically, we are assessing the process used to enroll and verify the eligibility of Medicare providers in Medicare's Provider Enrollment, Chain, and Ownership System (PECOS) and the extent to which CMS's controls are designed to prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in PECOS.
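To illustrate the kind of automated screening check described above, the following sketch shows, in simplified form, how an enrollment application might be checked for a valid National Provider Identifier, current licensure and accreditation, and absence from an exclusion list. This is a minimal sketch only; the data structures, field names, and example values are hypothetical and are not drawn from CMS's actual systems or data.

```python
# Minimal sketch of an automated enrollment screening check (hypothetical data model).
from dataclasses import dataclass

@dataclass
class EnrollmentApplication:
    provider_name: str
    npi: str                 # National Provider Identifier (10 digits)
    license_active: bool
    accredited: bool

def screen_application(app: EnrollmentApplication, oig_exclusion_list: set) -> list:
    """Return a list of screening failures; an empty list means the automated checks passed."""
    failures = []
    if not (app.npi.isdigit() and len(app.npi) == 10):
        failures.append("invalid NPI")
    if not app.license_active:
        failures.append("license not current")
    if not app.accredited:
        failures.append("accreditation missing")
    if app.npi in oig_exclusion_list:
        failures.append("provider appears on OIG exclusion list")
    return failures

# Example: a hypothetical application that would be flagged for follow-up or a site visit.
excluded_npis = {"1234567890"}
app = EnrollmentApplication("Example Home Health LLC", "1234567890", True, True)
print(screen_application(app, excluded_npis))  # ['provider appears on OIG exclusion list']
```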
Although CMS has taken many needed actions, we and OIG have found that CMS has not fully implemented other enrollment screening actions authorized by PPACA. These actions could help further reduce the enrollment of providers and suppliers intent on defrauding the Medicare program. They include issuing a rule to implement surety bonds for certain providers, issuing a rule on provider and supplier disclosure requirements, and establishing the core elements for provider and supplier compliance programs. Surety Bonds: PPACA authorized CMS to require a surety bond for certain types of at-risk providers and suppliers. Surety bonds may serve as a source for recoupment of erroneous payments. DMEPOS suppliers are currently required to post a surety bond at the time of enrollment. CMS reported in April 2014 that it had not scheduled for publication a proposed rule to implement the PPACA surety bond requirement for other types of at-risk providers and suppliers—such as home health agencies and independent diagnostic testing facilities. In light of the moratoria that CMS has placed on enrollment of home health agencies in fraud "hot spots," implementation of this rule could help the agency address potential concerns for these at-risk providers across the Medicare program. Providers and Suppliers Disclosure: CMS has not yet scheduled a proposed rule for publication for increased disclosures of prior actions taken against providers and suppliers enrolling or revalidating enrollment in Medicare, as authorized by PPACA, such as whether the provider or supplier has been subject to a payment suspension from a federal health care program. Agency officials had indicated that developing the additional disclosure requirements has been complicated by provider and supplier concerns about what types of information will be collected, what CMS will do with it, and how the privacy and security of this information will be maintained. Compliance Program: CMS has not established the core elements of compliance programs for providers and suppliers, as required by PPACA. We previously reported that agency officials indicated that they had sought public comments on the core elements, which they were considering, and were also studying criteria found in OIG model plans for possible inclusion. However, CMS had not yet scheduled a proposed rule for publication. Medicare uses prepayment review to deny claims that should not be paid and postpayment review to recover improperly paid claims. As claims go through Medicare's electronic claims payment systems, they are subjected to prepayment controls called "edits," most of which are fully automated; if a claim does not meet the criteria of the edit, it is automatically denied. Other prepayment edits are manual; they flag a claim for individual review by trained staff who determine whether it should be paid. Due to the volume of claims, CMS has reported that less than 1 percent of Medicare claims are subject to manual medical record review by trained personnel. Increased use of prepayment edits could help prevent improper Medicare payments. Our prior work found that, while use of prepayment edits saved Medicare at least $1.76 billion in fiscal year 2010, the savings could have been greater had prepayment edits been used more widely. Based on an analysis of a limited number of national policies and local coverage determinations (LCD), we identified $14.7 million in payments in fiscal year 2010 that appeared to be inconsistent with four national policies and therefore improper.
We also found more than $100 million in payments that were inconsistent with three selected LCDs that could have been identified using automated edits. Thus, we concluded that more widespread implementation of effective automated edits developed by individual Medicare administrative contractors (MAC) in other MAC jurisdictions could also result in savings to Medicare. CMS has taken steps to improve the development of other types of prepayment edits that are implemented nationwide, as we recommended. For example, the agency has centralized the development and implementation of automated edits based on a type of national policy called national coverage determinations. CMS has also modified its processes for identifying provider billing of services that are medically unlikely, in order to prevent circumvention of automated edits designed to identify an unusually large quantity of services provided to the same patient. We also evaluated the implementation of CMS's Fraud Prevention System (FPS), which uses predictive analytic technologies as required by the Small Business Jobs Act of 2010 to analyze Medicare fee-for-service (FFS) claims on a prepayment basis. FPS identifies investigative leads for CMS's Zone Program Integrity Contractors (ZPIC), the contractors responsible for detecting and investigating potential fraud. Implemented in July 2011, FPS is intended to help facilitate the agency's shift from focusing on recovering potentially fraudulent payments after they have been made, to detecting aberrant billing patterns as quickly as possible, with the goal of preventing these payments from being made. However, in October 2012, we found that, while FPS generated leads for investigators, it was not integrated with Medicare's payment-processing system to allow the prevention of payments until suspect claims could be determined to be valid. As of April 2014, CMS reported that while the FPS functionality to deny claims before payment had been integrated with the Medicare payment processing system in October 2013, the system did not have the ability to suspend payment until suspect claims could be investigated. In addition, while CMS directed the ZPICs to prioritize alerts generated by the system, in our work examining the sources of new ZPIC investigations in 2012, we found that FPS accounted for about 5 percent of ZPIC investigations in that year. A CMS official reported last month that ZPICs are now using FPS as a primary source of leads for fraud investigations, though the official did not provide details on how much of ZPICs' work is initiated through the system. Our prior work found that postpayment reviews are critical to identifying and recouping overpayments. The use of national recovery audit contractors (RAC) in the Medicare program is helping to identify underpayments and overpayments on a postpayment basis. CMS began the program in March 2009 for Medicare FFS. CMS reported that, as of the end of 2013, RACs had collected $816 million for fiscal year 2014. PPACA required the expansion of Medicare RACs to Parts C and D. CMS has implemented a RAC for Part D, and CMS said it plans to award a contract for a Part C RAC by the end of 2014. Moreover, in February 2014, CMS announced a "pause" in the RAC program as the agency makes changes to the program and starts a new procurement process for the next round of recovery audit contracts for Medicare FFS claims. CMS said it anticipates awarding all five of these new Medicare FFS recovery audit contracts by the end of summer 2014.
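As a minimal sketch of how a fully automated prepayment edit operates, the following example applies a hypothetical unit-limit and covered-diagnosis rule to a claim line and either pays, denies, or flags it for manual review. The procedure code, thresholds, and diagnosis codes are illustrative assumptions and do not represent actual Medicare coverage policy or CMS's claims systems.

```python
# Sketch of an automated prepayment edit applied to a single claim line (hypothetical rules).
from dataclasses import dataclass

@dataclass
class ClaimLine:
    procedure_code: str
    units: int
    diagnosis_codes: list

# Hypothetical edit criteria: a per-day unit limit and a set of covered diagnoses.
EDIT_RULES = {
    "J1234": {"max_units_per_day": 4, "covered_diagnoses": {"E11.9", "I10"}},
}

def apply_prepayment_edit(line: ClaimLine) -> str:
    rule = EDIT_RULES.get(line.procedure_code)
    if rule is None:
        return "pay"                       # no automated edit applies to this code
    if line.units > rule["max_units_per_day"]:
        return "deny"                      # automated denial, e.g., a medically unlikely quantity
    if not set(line.diagnosis_codes) & rule["covered_diagnoses"]:
        return "flag_for_manual_review"    # manual edit: trained staff review the claim
    return "pay"

print(apply_prepayment_edit(ClaimLine("J1234", 10, ["E11.9"])))  # deny
print(apply_prepayment_edit(ClaimLine("J1234", 2, ["Z00.0"])))   # flag_for_manual_review
```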
Other contractors help CMS investigate potentially fraudulent FFS payments, but CMS could improve its oversight of their work. CMS contracts with ZPICs in specific geographic zones covering the nation. We recently found that the ZPICs reported that their actions, such as stopping payments on suspect claims, resulted in more than $250 million in savings to Medicare in calendar year 2012. However, CMS lacks information on the timeliness of ZPICs’ actions—such as the time it takes between identifying a suspect provider and taking actions to stop that provider from receiving potentially fraudulent Medicare payments—and would benefit from knowing whether ZPICs could save more money by acting more quickly. Thus, in October 2013, we recommended that CMS collect and evaluate information on the timeliness of ZPICs’ investigative and administrative actions. CMS did not comment on our recommendation. We are currently examining the activities of the CMS contractors, including ZPICs, that conduct postpayment claims reviews. Our work is reviewing, among other things, whether CMS has a strategy for coordinating these contractors’ postpayment claims review activities. CMS has taken steps to improve use of two CMS information technology systems that could help analysts identify fraud after claims have been paid, but further action is needed. In 2011, we found that the Integrated Data Repository (IDR)—a central data store of Medicare and other data needed to help CMS program integrity staff and contractors detect improper payments of claims—did not include all the data that were planned to be incorporated by fiscal year 2010, because of technical obstacles and delays in funding. As of March 2014, the agency had not addressed our recommendation to develop reliable schedules to incorporate all types of IDR data, which could lead to additional delays in making available all of the data that are needed to support enhanced program integrity efforts and achieve the expected financial benefits. However, One Program Integrity (One PI)—a web-based portal intended to provide CMS staff and contractors with a single source of access to data contained in IDR, as well as tools for analyzing those data—is operational and CMS has established plans and schedules for training all intended One PI users, as we also recommended in 2011. However, as of March 2014, CMS had not established deadlines for program integrity contractors to begin using One PI, as we recommended in 2011. Without these deadlines, program integrity contractors will not be required to use the system, and as a result, CMS may fall short in its efforts to ensure the widespread use and to measure the benefits of One PI for program integrity purposes. Having mechanisms in place to resolve vulnerabilities that could lead to improper payments, some of which are potentially fraudulent, is critical to effective program management, but our work has shown weaknesses in CMS’s processes to address such vulnerabilities. Both we and OIG have made recommendations to CMS to improve the tracking of vulnerabilities. In our March 2010 report on the RAC demonstration program, we found that CMS had not established an adequate process during the demonstration or in planning for the national program to ensure prompt resolution of vulnerabilities that could lead to improper payments in Medicare; further, the majority of the most significant vulnerabilities identified during the demonstration were not addressed. 
In December 2011, OIG found that CMS had not resolved or taken significant action to resolve 48 of 62 vulnerabilities reported in 2009 by CMS contractors specifically charged with addressing fraud. We and OIG recommended that CMS have written procedures and time frames to ensure that vulnerabilities were resolved. CMS has indicated that it is now tracking vulnerabilities identified from several types of contractors through a single vulnerability tracking process, and the agency has developed some written guidance on the process. We recently examined that process and found that, while CMS informs MACs about vulnerabilities that could be addressed through prepayment edits, the agency does not systematically compile and disseminate information about effective local edits to address such vulnerabilities. Specifically, we recommended that CMS require MACs to share information about the underlying policies and savings related to their most effective edits, and CMS generally agreed to do so. In addition, in 2011, CMS began requiring MACs to report on how they had addressed certain vulnerabilities to improper payment, some of which could be addressed through edits. We also recently made recommendations to CMS to address the millions of Medicare cards that display beneficiaries' Social Security numbers, which increases beneficiaries' vulnerability to identity theft. In August 2012, we recommended that CMS (1) select an approach for removing Social Security numbers from Medicare cards that best protects beneficiaries from identity theft and minimizes burdens for providers, beneficiaries, and CMS and (2) develop an accurate, well-documented cost estimate for such an option. In September 2013, we further recommended that CMS (1) initiate an information technology project for identifying, developing, and implementing changes for the removal of Social Security numbers and (2) incorporate such a project into other information technology initiatives. HHS concurred with our recommendations and agreed that removing the numbers from Medicare cards is an appropriate step toward reducing the risk of identity theft. However, the department also said that CMS could not proceed with changes without agreement from other agencies, such as the Social Security Administration, and that funding was also a consideration. Thus, CMS has not yet taken action to address these recommendations. We are currently examining other options for updating and securing Medicare cards, including the potential use of electronic-card technologies. In addition, we and others have identified concerns with CMS oversight of fraud, waste, and abuse in Medicare's prescription drug program, Part D, including the contractors tasked with this work. To help address potential vulnerabilities in that program, we are examining practices for promoting prescription drug program integrity, and the extent to which CMS's oversight of Medicare Part D reflects those practices. Although CMS has taken some important steps to identify and prevent fraud, the agency must continue to improve its efforts to reduce fraud, waste, and abuse in the Medicare program. Identifying the nature, extent, and underlying causes of improper payments, and developing adequate corrective action processes to address vulnerabilities, are essential prerequisites to reducing them.
As CMS continues its implementation of PPACA and Small Business Jobs Act provisions, additional evaluation and oversight will help determine whether implementation of these provisions has been effective in reducing improper payments. We are investing resources in a body of work that assesses CMS's efforts to refine and improve its fraud detection and prevention abilities. Notably, we are currently assessing the potential use of electronic-card technologies, which can help reduce Medicare fraud. We are also examining the extent to which CMS's information system can help prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in Medicare. Additionally, we have a study underway examining CMS's oversight of fraud, waste, and abuse in Medicare Part D to determine whether the agency has adopted certain practices for ensuring the integrity of that program. We are also examining CMS's oversight of some of the contractors that conduct reviews of claims after payment. These studies are focused on additional actions for CMS that could help the agency more systematically reduce potential fraud in the Medicare program. Chairman Brady, Ranking Member McDermott, and Members of the Subcommittee, this concludes my prepared remarks. I would be pleased to respond to any questions you may have at this time. For further information about this statement, please contact Kathleen M. King at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Karen Doran, Assistant Director; Stephen Robblee; Lisa Rogers; Eden Savino; and Jennifer Whitworth were key contributors to this statement. Medicare: Second Year Update for CMS's Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-14-156. Washington, D.C.: March 7, 2014. Medicare Program Integrity: Contractors Reported Generating Savings, but CMS Could Improve Its Oversight. GAO-14-111. Washington, D.C.: October 25, 2013. Medicare Information Technology: Centers for Medicare and Medicaid Services Needs to Pursue a Solution for Removing Social Security Numbers from Cards. GAO-13-761. Washington, D.C.: September 10, 2013. Health Care Fraud and Abuse Control Program: Indicators Provide Information on Program Accomplishments, but Assessing Program Effectiveness Is Difficult. GAO-13-746. Washington, D.C.: September 30, 2013. Medicare Program Integrity: Increasing Consistency of Contractor Requirements May Improve Administrative Efficiency. GAO-13-522. Washington, D.C.: July 23, 2013. Medicare Program Integrity: Few Payments in 2011 Exceeded Limits under One Kind of Prepayment Control, but Reassessing Limits Could Be Helpful. GAO-13-430. Washington, D.C.: May 9, 2013. 2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013. Medicare Fraud Prevention: CMS Has Implemented a Predictive Analytics System, but Needs to Define Measures to Determine Its Effectiveness. GAO-13-104. Washington, D.C.: October 15, 2012. Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012. Medicare: CMS Needs an Approach and a Reliable Cost Estimate for Removing Social Security Numbers from Medicare Cards. GAO-12-831. Washington, D.C.: August 1, 2012.
Health Care Fraud: Types of Providers Involved in Medicare, Medicaid, and the Children’s Health Insurance Program Cases. GAO-12-820. Washington, D.C.: September 7, 2012. Program Integrity: Further Action Needed to Address Vulnerabilities in Medicaid and Medicare Programs. GAO-12-803T. Washington, D.C.: June 7, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Medicare: The First Year of the Durable Medical Equipment Competitive Bidding Program Round 1 Rebid. GAO-12-733T. Washington, D.C.: May 9, 2012. Medicare: Review of the First Year of CMS’s Durable Medical Equipment Competitive Bidding Program’s Round 1 Rebid. GAO-12-693. Washington, D.C.: May 9, 2012. Medicare: Important Steps Have Been Taken, but More Could Be Done to Deter Fraud. GAO-12-671T. Washington, D.C.: April 24, 2012. Medicare Program Integrity: CMS Continues Efforts to Strengthen the Screening of Providers and Suppliers. GAO-12-351. Washington, D.C.: April 10, 2012. Improper Payments: Remaining Challenges and Strategies for Governmentwide Reduction Efforts. GAO-12-573T. Washington, D.C.: March 28, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Expand Efforts to Support Program Integrity Initiatives. GAO-12-292T. Washington, D.C.: December 7, 2011. Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-12-104T. Washington, D.C.: October 4, 2011. Medicare Part D: Instances of Questionable Access to Prescription Drugs. GAO-11-699. Washington, D.C.: September 6, 2011. Medicare Integrity Program: CMS Used Increased Funding for New Activities but Could Improve Measurement of Program Effectiveness. GAO-11-592. Washington, D.C.: July 29, 2011. Improper Payments: Reported Medicare Estimates and Key Remediation Strategies. GAO-11-842T. Washington, D.C.: July 28, 2011. Fraud Detection Systems: Additional Actions Needed to Support Program Integrity Efforts at Centers for Medicare and Medicaid Services. GAO-11-822T. Washington, D.C.: July 12, 2011. Fraud Detection Systems: Centers for Medicare and Medicaid Services Needs to Ensure More Widespread Use. GAO-11-475. Washington, D.C.: June 30, 2011. Improper Payments: Recent Efforts to Address Improper Payments and Remaining Challenges. GAO-11-575T. Washington, D.C.: April 15, 2011. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008. Medicare: Competitive Bidding for Medical Equipment and Supplies Could Reduce Program Payments, but Adequate Oversight Is Critical. GAO-08-767T. Washington, D.C.: May 6, 2008. Improper Payments: Status of Agencies’ Efforts to Address Improper Payment and Recovery Auditing Requirements. GAO-08-438T. Washington, D.C.: January 31, 2008. 
Improper Payments: Federal Executive Branch Agencies’ Fiscal Year 2007 Improper Payment Estimate Reporting. GAO-08-377R. Washington, D.C.: January 23, 2008. Medicare: Improvements Needed to Address Improper Payments for Medical Equipment and Supplies. GAO-07-59. Washington, D.C.: January 31, 2007. Medicare: More Effective Screening and Stronger Enrollment Standards Needed for Medical Equipment Suppliers. GAO-05-656. Washington, D.C.: September 22, 2005. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has designated Medicare as a high-risk program, in part because the program's size and complexity make it vulnerable to fraud, waste, and abuse. In 2013, Medicare financed health care services for approximately 51 million individuals at a cost of about $604 billion. The deceptive nature of fraud makes its extent in the Medicare program difficult to measure in a reliable way, but it is clear that fraud contributes to Medicare's fiscal problems. More broadly, in fiscal year 2013, CMS estimated that improper payments—some of which may be fraudulent—were almost $50 billion. This statement focuses on the progress made and important steps to be taken by CMS and its program integrity contractors to reduce fraud in Medicare. These contractors perform functions such as screening and enrolling providers, detecting and investigating potential fraud, and identifying improper payments and vulnerabilities that could lead to payment errors. This statement is based on relevant GAO products and recommendations issued from 2004 through 2014 using a variety of methodologies. In April 2014, GAO also received updated information from CMS on its actions related to the laws, regulations, and guidance discussed in this statement. Additionally, GAO updated information by examining public documents and relevant policies and procedures. The Centers for Medicare & Medicaid Services (CMS)—the agency within the Department of Health and Human Services (HHS) that oversees Medicare—has made progress in implementing several key strategies GAO identified in prior work as helpful in protecting Medicare from fraud; however, important actions that could help CMS and its program integrity contractors combat fraud remain incomplete. Provider Enrollment: The Patient Protection and Affordable Care Act (PPACA) authorized, and CMS has implemented, actions to strengthen provider enrollment that address past weaknesses identified by GAO and HHS's Office of Inspector General. For example, CMS has hired contractors to determine whether providers and suppliers have valid licenses and are at legitimate locations. CMS also recently contracted for fingerprint-based criminal history checks for high-risk providers and suppliers. CMS could further strengthen provider enrollment by issuing a rule to require additional provider and supplier disclosures of information and establishing core elements for provider and supplier compliance programs, as authorized by PPACA. Prepayment and Postpayment Claims Review: Medicare uses prepayment review to deny claims that should not be paid and postpayment review to recover improperly paid claims. GAO has found that increased use of prepayment edits could help prevent improper Medicare payments. For example, prior GAO work identified millions of dollars of payments inconsistent with selected coverage and payment policies and therefore improper. Postpayment reviews are also critical to identifying and recouping payments. GAO recommended better oversight of both the information systems analysts use to identify claims for postpayment review, in a 2011 report, and the contractors responsible for these reviews, in a 2013 report. CMS has addressed some of these recommendations. Addressing Identified Vulnerabilities: Having mechanisms in place to resolve vulnerabilities that could lead to improper payments is critical to effective program management and could help address fraud.
However, GAO work has shown weaknesses in CMS's processes to address such vulnerabilities, placing the Medicare program and its beneficiaries at risk. For example, GAO has made multiple recommendations to CMS to remove Social Security numbers from beneficiaries' Medicare cards to help prevent identity theft, and, while HHS agreed with these recommendations, the department also reported that CMS could not proceed with the changes for a variety of reasons, including funding limitations. Thus, to date, CMS has not taken action on these recommendations. GAO has work underway addressing these key strategies, including assessing the potential use of electronic-card technologies to help reduce Medicare fraud. GAO is also examining the extent to which CMS's information system can prevent and detect the continued enrollment of ineligible or potentially fraudulent providers in Medicare. Additionally, GAO is studying CMS's oversight of program integrity efforts for prescription drugs and is examining CMS's oversight of some of the contractors that conduct reviews of claims after payment. These studies are focused on additional actions for CMS that could help the agency more systematically reduce potential fraud in the Medicare program.
The IRS and tax administrators worldwide generally use similar administrative practices. Information reporting. Information reporting is a widely accepted practice for increasing taxpayer compliance. Under U.S. law, some types of transactions are required to be reported to the IRS by third parties who make payments to, or sometimes receive payments from, individual taxpayers. Typically, information returns represent income paid to the taxpayer, such as wages or bank account interest. After tax returns are filed, the IRS then matches the amounts reported on information returns to the amounts reported on the taxpayer’s return. For any differences, the IRS may send a notice to the taxpayer requesting an explanation. If the taxpayer does not respond to the notice, the IRS may make an additional assessment. For fiscal year 2010, the IRS received over 2.7 billion information returns, sent 5.5 million notices on differences between information returns and tax returns, and assessed an additional $20.7 billion in taxes, interest, and penalties. Withholding. Withholding is a widely accepted practice to increase taxpayer compliance. Under U.S. law, employers must withhold income tax from the wages paid to employees. Withholding from salaries requires wage earners to pay enough tax during the tax year to assure that they will not face a large payment at year end. Also, withholding can be required as a backup to information reporting if a payee fails to furnish a correct taxpayer identification number (TIN). If the payee refuses to furnish a TIN, the payer generally withholds 28 percent of the amount of the payment— for example, interest payments on a bank account—and remits that amount to the IRS. Electronic tax administration. Many tax administrators in the United States and worldwide have increasingly used electronic tax administration to improve services and reduce costs. The IRS has focused its electronic tax administration on filing tax returns over the Internet, providing taxpayer assistance through its Web site, and providing telephone assistance. To accept electronically filed tax returns, IRS has authorized preapproved e-file providers to submit the returns. IRS cannot accept electronically filed returns directly from taxpayers. Through its Web site, IRS provides taxpayers with publications explaining tax law and IRS administrative procedure. The Web site also provides automated services such as “Where’s My Refund?” During the 2010 filing season, the IRS Web site had 239 million total visits and 277 million searches. IRS also received 77 million telephone calls of which IRS phone assistors answered about 24 million calls. Tax enforcement. The U.S. tax system, as well as many other tax systems worldwide, is based on some degree of self-reporting and voluntary compliance by taxpayers. Tax administrations worldwide have enforcement programs to ensure that tax returns are accurate and complete and taxes are paid. Among others, IRS uses two principal enforcement programs. After tax returns have been filed, the Automated Underreporter Program matches data on information returns (usually on income) provided by employers, banks, and other payers of income to data reported on taxpayers’ tax returns. IRS may contact taxpayers about any differences. The Examination Program relies on IRS auditors to check compliance in reporting income, deductions, credits, and other issues on tax returns by reviewing the documents taxpayers provided to support their tax return. 
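The information-return matching and backup withholding practices described above reduce to simple comparisons and arithmetic. The short sketch below illustrates both, as a rough approximation only: the record layout, the notice-triggering tolerance, and the example amounts are assumptions made for illustration; the 28 percent backup withholding rate is the figure cited above.

```python
# Sketch of information-return matching and backup withholding (simplified assumptions).

BACKUP_WITHHOLDING_RATE = 0.28  # rate cited above when a payee fails to furnish a TIN

def backup_withholding(payment_amount: float, tin_furnished: bool) -> float:
    """Amount a payer would withhold and remit when no correct TIN is furnished."""
    return 0.0 if tin_furnished else round(payment_amount * BACKUP_WITHHOLDING_RATE, 2)

def match_information_returns(reported_on_return: float, info_return_total: float,
                              tolerance: float = 1.0) -> str:
    """Compare income reported by the taxpayer with totals from third-party information returns."""
    if info_return_total - reported_on_return > tolerance:
        return "send notice requesting explanation"   # possible underreporting
    return "no action"

print(backup_withholding(500.00, tin_furnished=False))         # 140.0
print(match_information_returns(reported_on_return=40000.0,
                                info_return_total=42500.0))    # send notice requesting explanation
```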
IRS, like revenue agencies in many countries, administers tax expenditures. Tax expenditures are tax provisions that grant special tax relief for certain kinds of behavior by taxpayers or for taxpayers in special circumstances. Tax expenditures reduce the amount of taxes owed and therefore are seen as resulting in the government forgoing revenues. These provisions are viewed by many analysts as spending channeled through the tax system. For fiscal year 2010, the U.S. Department of the Treasury reported 173 tax expenditures costing, in aggregate, more than $1 trillion. Tax expenditures are often aimed at policy goals similar to those of federal spending programs, such as encouraging economic development in disadvantaged areas and financing postsecondary education. In 2005, we reported that all U.S. federal spending and tax policy tools, including tax expenditures, should be reexamined to ensure that they are achieving their intended purposes and designed in the most efficient and effective manner. The following examples illustrate how New Zealand, Finland, the European Union (EU), the United Kingdom (UK), Australia, and Hong Kong have addressed well-known tax administration issues. Our work does not suggest that these practices should or should not be adopted by the United States. New Zealand, like the United States, addresses various national objectives through a combination of tax expenditures and discretionary spending programs. In New Zealand, tax expenditures are known as tax credits. New Zealand has overcome obstacles to evaluating these related programs at the same time to better judge whether they are working effectively. Rather than doing separate evaluations, New Zealand completes integrated evaluations of tax expenditures and discretionary spending programs to analyze their combined effects. Using this approach, New Zealand can determine, in part, whether tax expenditures and discretionary spending programs work together to accomplish government goals. One example is the Working For Families (WFF) Tax Credits program, which is an entitlement for families with dependent children to promote employment. Prior to the introduction of WFF in 2004, New Zealand's Parliament discovered that many low-income families were not better off from holding a low-paying job, and those who needed to pay for childcare in order to work were generally worse off in low-paid work compared to only receiving government benefits. This prompted Parliament to change its in-work incentives and financial support, including tax expenditures. The WFF Tax Credits program differs from tax credit programs in the United States in that it is an umbrella program that spans certain tax credits administered by the Inland Revenue Department (IRD) as well as discretionary spending programs administered by the Ministry of Social Development (MSD). IRD collects most of the revenue and administers the tax expenditures for the government. Being responsible for collecting sensitive taxpayer information, IRD must maintain tax privacy and protect the integrity of the New Zealand tax system. MSD administers the WFF's program funds and is responsible for collecting data that include monthly income received by its beneficiaries. Given different responsibilities, IRD and MSD keep separate datasets, making it difficult to assess the cumulative effect of the WFF program.
Therefore, to understand the cumulative effect of changes made to the WFF program and ensure that eligible participants were using it, New Zealand created a joint research program between IRD and MSD that ran from October 2004 to April 2010. The joint research program created linked datasets between IRD and MSD. Access to sensitive taxpayer information was restricted to IRD employees on the joint research program and to authorized MSD employees only after they were sworn in as IRD employees. The research provided information on key outcomes that could only be tracked through the linked datasets. The research found that the WFF program aided the transition from relying on government benefits to employment, as intended. It also found that a disproportionate number of those not participating in the program faced barriers to taking advantage of the WFF. Barriers included the perceived stigma from receiving government aid, the transaction costs of too many rules and regulations, and the small amounts of aid for some participants. On the basis of these findings, Parliament made changes to WFF that provided an additional NZ$1.6 billion (US$1.2 billion) per year in increased financial entitlements and in-work support to low- to middle-income families. Appendix II provides more details on New Zealand's evaluation of tax expenditures as well as similarities and differences to the U.S. system. Finland encourages accurate withholding of taxes from taxpayers' income, lowers its costs, and reduces taxpayers' filing burdens through Internet-based electronic services. In 2006, Finland established a system, called the Tax Card, to help taxpayers estimate a withholding rate for the individual income tax. The Tax Card, an Internet-based system, covers Finland's national tax, municipality tax, social security tax, and church tax. The Tax Card is accessed through secured systems in the taxpayer's Web bank or an access card issued by Finland's government. The Tax Card system enables taxpayers to update their withholding rate as many times as needed throughout the year, adjusting for events that increase or decrease their income tax liability. When an update is completed, the employer is notified of the changed withholding tax rate through the mail or by the employee providing a copy to the employer. According to the Tax Administration, about a third of all taxpayers using the Tax Card—about 1.6 million people—change their withholding percentages at least annually. Finland generally refunds a small amount of the withheld funds to taxpayers (e.g., it refunded about 8 percent of the withheld money in 2007). Finland also has been preparing income tax returns for individuals since 2006. The Tax Administration prepares the return for the tax year ending on December 31st based on third-party information returns, such as reporting by employers on wages paid or by banks on interest paid to taxpayers. During April, the Tax Administration mails the preprepared return for the taxpayer's review. Taxpayers can revise the paper form and return it to the Tax Administration in the mail or revise the return electronically online. According to Tax Administration officials, about 3.5 million people do not ask to change their tax return and about 1.5 million will request a tax change. Electronic tax administration is part of a governmentwide policy to use electronic services to lower the cost of government and encourage growth in the private sector.
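A minimal sketch of the withholding-update arithmetic behind a service like the Tax Card follows, assuming the taxpayer supplies a projected annual income and projected annual tax. The single-rate calculation is a placeholder assumption and does not reflect Finland's actual progressive schedule of national, municipal, social security, and church taxes.

```python
# Sketch of a mid-year withholding rate update (assumed inputs, simplified tax model).

def updated_withholding_rate(projected_annual_income: float,
                             projected_annual_tax: float,
                             tax_already_withheld: float,
                             income_already_paid: float) -> float:
    """Rate to apply to remaining income so total withholding matches the projected tax."""
    remaining_income = projected_annual_income - income_already_paid
    remaining_tax = projected_annual_tax - tax_already_withheld
    if remaining_income <= 0:
        return 0.0
    return max(0.0, round(remaining_tax / remaining_income, 4))

# Example with hypothetical figures: income rises mid-year, so the rest of the year needs a higher rate.
rate = updated_withholding_rate(projected_annual_income=48000, projected_annual_tax=12000,
                                tax_already_withheld=4500, income_already_paid=20000)
print(rate)  # 0.2679 -> roughly 26.8 percent on the remaining income
```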
According to Tax Administration staff, increasing electronic services to taxpayers helps to lower costs. Overall, the growth of electronic services, according to Finnish officials, has helped to reduce Tax Administration staff by over 11 percent from 2003 to 2009 while improving taxpayer service. According to officials of the Finnish government as well as public-interest and trade groups, the Tax Card and preprepared return systems were established under a strong culture of national cooperation. For the preprepared return system to work properly, Finland’s business and other organizations that prepare information returns had to accept the burden to comply in filing accurate returns promptly following the end of the tax year. Finland’s tax system is positively viewed by taxpayers and industry groups, according to our discussions with several industry and taxpayer groups. They stated that Finland has a simple, stable tax system which makes compliance easier to achieve. As a result, few individuals use a tax advisor to help prepare and file their annual income tax return. Appendix III provides more details on Finland’s electronic tax administration system as well as a discussion of similarities to and differences from the U.S. system. The EU seeks to improve tax compliance through a multilateral agreement on the exchange of information on interest earned by each nation’s individual taxpayers. This agreement addresses common issues with the accuracy and usefulness of information exchanged among nations that have differing technical, language, and formatting approaches for recording and transmitting such information. Under the treaty, called the Savings Taxation Directive, adopted in June 2003, the 27 EU members and 10 other participants agreed to share information about income from interest payments made to individuals who are citizens in another member nation. With this information, the tax authorities are able to verify whether their citizens properly reported and paid tax on the interest income. The directive provides the basic framework for the information exchange, defining essential terms and establishing automatic information exchange among members. As part of the directive, 3 EU member nations as well as the 5 European nonmember nations agreed to apply a withholding tax with revenue sharing (described below) during a transition period through 2011, rather than automatically exchanging information. Under this provision, a 15 percent withholding tax gradually increases to 35 percent by July 1, 2011. The withholding provision included a revenue-sharing provision, which authorizes the withholding nation to retain 25 percent of the tax collected and transfer the other 75 percent to the nation of the account owner. The directive also requires the account owner’s home nation to ensure that withholding does not result in double taxation. The directive does this by granting a tax credit equal to the amount of tax paid to the nation in which the account is located. A September 2008 report to the EU Council described the status of the directive’s implementation. During the first 18 months of information exchange and withholding, data limitations such as incomplete information on the data exchanged and tax withheld created major difficulties for evaluating the directive’s effectiveness. Further, no benchmark was available to measure the effect of the changes. 
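The directive's withholding and revenue-sharing arithmetic can be illustrated with a short sketch. The interest amount is hypothetical; the 25/75 split and the 35 percent rate that applied from July 1, 2011, are the figures described above, and the credit shown reflects the directive's requirement that the home nation offset the withheld tax to avoid double taxation.

```python
# Sketch of the Savings Taxation Directive withholding split (hypothetical interest amount).

def savings_directive_split(interest_paid: float, withholding_rate: float) -> dict:
    withheld = interest_paid * withholding_rate
    return {
        "tax_withheld": round(withheld, 2),
        "retained_by_withholding_nation": round(withheld * 0.25, 2),
        "transferred_to_home_nation": round(withheld * 0.75, 2),
        "credit_due_to_account_owner": round(withheld, 2),  # offsets tax in the home nation
    }

# Example: 1,000 of interest at the 35 percent rate in effect from July 1, 2011.
print(savings_directive_split(1000.0, 0.35))
# {'tax_withheld': 350.0, 'retained_by_withholding_nation': 87.5,
#  'transferred_to_home_nation': 262.5, 'credit_due_to_account_owner': 350.0}
```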
According to EU officials, the most common administrative issues, especially during the first years of the directive's implementation, have involved identifying the account owner from the data reported in the computerized format. It is generally recognized that a Taxpayer Identification Number (TIN) provides the best means of identifying the owner. However, the current directive does not require paying agents to record a TIN. Using names has caused problems when other EU member states tried to access the data. For example, a name that is misspelled cannot be matched. In addition, how some member states format their mailing addresses may have led to data-access problems. Other problems with implementing the directive include identifying whether investors moved their assets into categories not covered by the directive (e.g., shifting to equity investments), and concerns that tax withholding provisions may not be effective because withholding rates were low until 2011 when the rate became 35 percent. The EU also identified problems with the definition of terms, making uniform application of the directive difficult. Generally, these terms identify which payments are covered by the directive, who must report under the directive, and who owns the interest for tax purposes. Nevertheless, EU officials stated that the quality of data has improved over the years. The EU officials have worked with EU member nations to resolve specific data issues, which has contributed to the effective use of the information exchanged under the directive. EU officials told us that the monitoring role by the EU Commission, the data-corrections process, and frequent contacts to resolve specific issues have contributed to effective use of the data received by EU member states. Appendix IV provides more details on the EU Savings Taxation Directive and related issues such as avoiding double taxation as well as a discussion of similarities to and differences from the U.S. system. The UK promotes accurate tax withholding and reduces taxpayers' filing burdens by calculating withholding rates for taxpayers and requiring that payers of certain types of income withhold taxes at standard rates. The UK uses information reporting and withholding to simplify tax reporting and tax payments for individual tax returns. Both the individual taxpayer and Her Majesty's Revenue and Customs (HMRC)—the tax administrator—are to receive information returns from third parties that make payments to a taxpayer such as for bank account interest. A key element is the UK's Pay As You Earn (PAYE) system. Under the PAYE system, HMRC calculates an amount of withholding from wages to meet a taxpayer's liability for the current tax year based on information reporting from the employer and other income information employees may provide. According to HMRC officials, the individual tax system in the UK is simple for most taxpayers who are subject to PAYE. PAYE makes it unnecessary for wage earners to file a yearly tax return, unless special circumstances apply. For example, wage earners do not need to file a return unless income from interest, dividends, or capital gains exceeds certain thresholds or if deductions need to be reported. Therefore, a tax return may not be required because most individuals do not earn enough of these income types to trigger self-reporting. For example, the first £10,100 (US$16,239) of capital gains income is exempt from being reported on tax returns.
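As a rough sketch of the PAYE idea, under which HMRC rather than the taxpayer works out the withholding, the following example converts an estimated annual liability into a monthly withholding amount and optionally folds in unpaid tax from a previous year, a reconciliation feature discussed in the next paragraph. The allowance, rate, and wage figures are assumed for illustration and do not represent actual UK tax codes, bands, or allowances.

```python
# Sketch of a PAYE-style withholding instruction (placeholder flat-rate calculation).

def paye_monthly_withholding(expected_annual_wages: float,
                             personal_allowance: float,
                             basic_rate: float,
                             prior_year_underpayment: float = 0.0) -> float:
    """Monthly amount the employer would be instructed to withhold from wages."""
    taxable = max(0.0, expected_annual_wages - personal_allowance)
    annual_liability = taxable * basic_rate + prior_year_underpayment
    return round(annual_liability / 12, 2)

# Example with assumed figures: 30,000 of wages, a 6,475 allowance, a 20 percent rate,
# plus 240 of unpaid tax carried forward from the previous year.
print(paye_monthly_withholding(30000, 6475, 0.20, prior_year_underpayment=240))  # 412.08
```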
PAYE also facilitates the reconciliation of tax liabilities for prior tax years through the use of withholding at source for wages. The withheld amount may be adjusted by HMRC to collect any unpaid taxes from previous years or refund overpayments. HMRC annually notifies the taxpayer and employer of the amount to withhold. HMRC also may adjust the withheld amount through information provided by taxpayers. If taxpayers provide the information on their other income such as self-employment earnings, rental income, or investment income, HMRC can adjust the PAYE withholding. Individuals not under the PAYE system are required to file a tax return after the end of the tax year based on their records. In addition, HMRC uses information reporting and tax withholding as part of its two-step process to assess the compliance risks on filed returns. In the first step, individual tax returns are reviewed for inherent compliance risks because of the taxpayers’ income level and complexity of the tax return. For example, wealthy taxpayers with complex business income are considered to have a higher compliance risk than a wage earner. In the second step, information compiled from various sources—including information returns and public sources—is analyzed to identify returns with a high compliance risk. According to HMRC officials, these assessments have allowed HMRC to look at national and regional trends. HMRC is also attempting to uncover emerging compliance problems by combining and analyzing data from the above sources as well as others. Appendix V provides more details on the UK’s information reporting and withholding system as well as a discussion of similarities to and differences from the U.S. system. The Australian High Net Wealth Individuals (HNWI) program focuses on the characteristics of wealthy taxpayers that affect their tax compliance. High-wealth individuals often have complex business relationships involving many entities they may directly control or indirectly influence and these relationships may be used to reduce taxes illegally or in a manner that policymakers may not have intended. The HNWI program requires these taxpayers to provide information on these relationships and provides such taxpayers additional guidance on proper tax reporting. According to the Australian Taxation Office (ATO), in the mid-1990s, ATO was perceived as enforcing strict sanctions on the average taxpayers but not the wealthy. By 2008, ATO found that high-wealth taxpayers, those with a net worth of more than A$30 million (US$20.9 million), had substantial income from complex arrangements, which made it difficult for ATO to identify and assure compliance. ATO concluded that the wealthy required a different tax administration approach. ATO set up a special task force to improve its understanding of wealthy taxpayers, identify their tax planning techniques, and improve voluntary compliance. Due to some wealthy taxpayers’ aggressive tax planning, which ATO defines as investment schemes and legal structures that do not comply with the law, ATO quickly realized that it could not reach its goals for voluntary compliance for this group by examining taxpayers as individual entities. To tackle the problem, ATO began to view wealthy taxpayers as part of a group of related business and other entities. Focusing on control over related entities rather than on just individual tax obligations provided a better understanding of wealthy individuals’ compliance issues. The HNWI approach followed ATO’s general compliance model. 
The model's premise is that tax administrators can influence tax compliance behavior through their responses and interventions. For compliant wealthy taxpayers, ATO developed a detailed questionnaire and expanded the information on business relationships that these taxpayers must report on their tax return. For noncompliant wealthy taxpayers, ATO is to assess the tax risk and then determine the intensity of ATO's compliance interventions. According to 2008 ATO data, the HNWI program has produced financial benefits. From the establishment of the program in 1996 until 2007, ATO had collected A$1.9 billion (US$1.67 billion) in additional revenue and reduced revenue losses by A$1.75 billion (US$1.5 billion) through compliance activities focused on highly wealthy individuals and their associated entities. ATO's program focus on high-wealth individuals and their related entities has been adopted by other tax administrators. By 2009, nine other countries, including the United States, had formed groups to focus resources on high-wealth individuals. Appendix VI provides more details on Australia's high-wealth program as well as similarities and differences to the U.S. system. Although withholding of taxes by payers of income is a common practice to ensure high levels of taxpayer compliance, Hong Kong's Salaries Tax does not require withholding by employers. Instead, tax administrators and taxpayers appear to find a semiannual payment approach effective. Hong Kong's Salaries Tax is a tax on wages and salaries with a small number of deductions (e.g., charitable donations and mortgage interest). The Salaries Tax is paid by about 40 percent of the estimated 3.4 million wage earners in Hong Kong, while the other 60 percent are exempt from Salaries Tax. Rather than using periodic (e.g., biweekly or monthly) tax withholding by employers, Hong Kong collects the Salaries Tax through two payments by taxpayers for a tax year. Since the tax year runs from April 1st through March 31st, a substantial portion of income for the tax year is earned by January (i.e., income for April to December), and the taxpayer is to pay 75 percent of the tax for that tax year in January (as well as pay any unpaid tax from the previous year). The remaining 25 percent of the estimated tax is to be paid 3 months later in April. By early May, the Inland Revenue Department (IRD)—the tax administrator—annually prepares individual tax returns for taxpayers based on information returns filed by employers. Taxpayers review the prepared return, make any revisions such as including deductions (e.g., charitable contributions), and file it with IRD. IRD then will review the returns and determine if any additional tax is due. If the final Salaries Tax assessment turns out to be higher than the estimated tax previously assessed, IRD is to notify the taxpayer, who is to pay the additional tax concurrently with the January payment of estimated tax for the next tax year. Hong Kong's tax system is positively viewed by tax experts, practitioners, and a public opinion expert based on our discussions with them. They generally believe that low tax rates, a simple system, and cultural values contribute to Hong Kong's collection of the Salaries Tax through the two payments rather than periodic withholding. Tax rates are fairly low, starting at 2 percent of the adjusted salary earned and not exceeding 15 percent. Further, tax experts told us that the Salaries Tax system is simple.
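The two-installment pattern described above can be expressed as a short worked sketch, assuming an estimated Salaries Tax amount and an outstanding balance from the prior year's final assessment; both figures are hypothetical, and the estimate itself would be produced by IRD's assessment process rather than by this arithmetic.

```python
# Sketch of the two-installment provisional payment pattern (hypothetical amounts).

def salaries_tax_installments(estimated_tax_current_year: float,
                              prior_year_balance_due: float = 0.0) -> dict:
    """Roughly 75 percent due in January (plus any prior-year balance) and 25 percent in April."""
    january_payment = round(estimated_tax_current_year * 0.75 + prior_year_balance_due, 2)
    april_payment = round(estimated_tax_current_year * 0.25, 2)
    return {"january": january_payment, "april": april_payment}

# Example: HK$24,000 of estimated tax and HK$1,000 still owed on last year's final assessment.
print(salaries_tax_installments(24000, prior_year_balance_due=1000))
# {'january': 19000.0, 'april': 6000.0}
```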
Few taxpayers use a tax preparer because the tax form is very straightforward and the tax system is described as “stable.” Further, an expert on public opinion in Hong Kong told us that taxpayers fear a loss of face if recognized as not complying with tax law. This cultural attitude helps promote compliance. IRS officials learn about foreign tax practices by participating in international organizations of tax administrators. IRS is actively involved in two international tax organizations and one jointly run program that addresses common tax administration issues. First, the IRS participates with the Center for Inter-American Tax Administration (CIAT), a forum made up of 38 member countries and associate members, which exchange experiences with the aim of improving tax administration. CIAT, formed in 1967, is to promote integrity, increase tax compliance, and fight tax fraud. The IRS participates with CIAT in designing and developing tax administration products and with CIAT’s International Tax Planning Control committee. Second, the IRS participates with the Organisation for Economic Co-operation and Development (OECD) Forum on Tax Administration (FTA), which is chaired by the IRS Commissioner during 2011. The FTA was created in July 2002 to promote dialogue between tax administrations and identify good tax administration practices. Since 2002, the forum has issued over 50 comparative analyses on tax administration issues to assist member and selected nonmember countries. IRS and OECD officials exchange tax administration knowledge. For example, the IRS is participating in the OECD’s first peer review of information exchanged under tax treaties and tax information exchange agreements. Under the peer-review process, senior tax officials from several OECD countries examine each selected member’s legal and regulatory framework and evaluate members’ implementation of OECD tax standards. The peer-review report on IRS information exchange practices is expected to be published in mid-2011. As for the jointly run program, the Joint International Tax Shelter Information Centre (JITSIC) attempts to supplement ongoing work in each country to identify and curb abusive tax schemes by exchanging information on these schemes. JITSIC was formed in 2004 and now includes tax agencies of Australia, Canada, China, Japan, South Korea, the United Kingdom, and the United States. According to the IRS, JITSIC members have identified and challenged the following highly artificial arrangements: a cross-border scheme involving millions of dollars in improper deductions and unreported income on tax returns from retirement account withdrawals; highly structured financing transactions created by financial institutions that taxpayers used to generate inappropriate foreign tax credit benefits; and made-to-order losses on futures and options transactions for individuals in other JITSIC jurisdictions, leading to more than $100 million in evaded taxes. To date, the IRS has implemented one foreign tax administration practice. As presented earlier, Australia’s HNWI program examines sophisticated legal structures that wealthy taxpayers may use to mask aggressive tax strategies. 
In 2009, the OECD issued a report for a project on the tax compliance problems of wealthy individuals and concluded that “high net worth individuals pose significant challenges to tax administrations” due to their complex business dealings across different business entities, higher tax rates, and higher likelihood of using aggressive tax planning or tax evasion. According to an IRS official, during IRS’s participation in the OECD project in 2008, IRS staff began to realize the value of Australia’s program to the U.S. tax system. The IRS now has a program focused on wealthy individuals and their networks. The IRS provided technical comments that are included in this report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the report date. At that time, we will send copies to the Commissioner of Internal Revenue and other interested parties. This report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. For our objective to describe how other countries have approached tax administration issues that are similar to those in the U.S. tax system, we selected six foreign tax administrators. We based our selection of these tax administrators and their practices on several factors, including whether the tax administrators had advanced economies and tax systems and whether their approaches differed, at least in part, from how the United States approaches similar issues. These tax administrators also needed to have enough information available in English on their Web sites for us to preliminarily understand their tax systems and practices. In addition, we considered practices of interest to the requesters. To describe each of the practices, we reviewed documents and held telephone conferences with officials from each tax administrator. We also met with officials of Finland’s government in Helsinki. When possible, we confirmed additional information provided to us by officials to assure that we had a reasonable basis for the data presented. We used official reports published by the tax administrators, such as their annual reports, that are made available to the public on their Internet Web sites. To identify taxpayers’ attitudes toward Hong Kong’s semiannual payment system, we interviewed experts who were university professors, authors of publications on Hong Kong’s tax system, or practitioners in well-known law or accounting firms. To understand the development of Finland’s Internet-based withholding estimation and prepared returns system, we met with the public interest and trade groups that provided assistance to Finland’s Parliament during the system’s development. To describe whether and how the Internal Revenue Service (IRS) identifies and integrates tax administration practices used in other countries, we interviewed IRS officials and reviewed related documents. We also followed up with IRS officials based on any information we found independently about practices that relate to issues in the U.S. tax system and our comparison of U.S. practices with those of the other administrators. 
The descriptive information on the practices of foreign administrators presented in this report may provide useful insights for Congress and others on alternatives to current U.S. tax policies and practices. However, our work did not include the separate analytic step of identifying and assessing the factors that might affect the transferability of the practices to the United States. To adjust foreign currencies to U.S. dollars, we used the Federal Reserve Board’s database on foreign exchange rates. We used rates that matched the time period cited for the foreign amount. For current amounts, we used the exchange rates published for February 25, 2011. If the amounts were for a previous year, we used the exchange rate published for the last business day of that year. For example, if foreign amounts were cited as of 2006, we used exchange rates for December 29, 2006. We did not adjust amounts from previous years for inflation. To help ensure the accuracy of the information we present, we shared a summary of our descriptions with representatives of the six foreign tax administrators and incorporated their comments as appropriate. The IRS provided technical comments that are included in this report. We conducted our work from October 2009 to May 2011 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this report. The New Zealand tax system is centralized through the Inland Revenue Department (IRD). Most of New Zealand’s NZ$49 billion (US$35.5 billion) in revenue for fiscal year 2009 was raised by direct taxation that includes PAYE (Pay As You Earn), Company Tax, and Schedular Payments. In addition, tax expenditures (tax credits in New Zealand) for social programs administered by IRD in 2009 included the KiwiSaver and Working For Families (WFF) Tax Credits programs. The WFF Tax Credits program, started in 2004, seeks to assist low- to middle-income families with the goal of promoting employment and ensuring income adequacy. Prior to 2004, New Zealand had another program intended to assist families. However, the New Zealand government discovered that many low-income families were no better off from holding a low-paying job and that those who needed to pay for childcare to work generally were worse off in low-paid work compared to only receiving government benefits. This prompted the government to change in-work incentives and financial support for families with dependent children. These changes were incorporated into the WFF program in 2004. It was estimated that program costs would increase by NZ$1.6 billion (US$1.2 billion) per year. The WFF Tax Credits program is an umbrella program that spans certain tax credits administered by the IRD as well as discretionary spending programs administered by the Ministry of Social Development (MSD). Table 2 shows the tax and discretionary spending components of the WFF tax credits program and the agency responsible for them. Under the program, IRD makes payments to the majority of eligible recipients during the tax year. The IRD and MSD portions of the WFF tax credit program are intended to work together to assist low- to middle-income families and promote employment. 
Information that IRD collects and uses in administering the tax credits is subject to New Zealand’s protections for the privacy of sensitive taxpayer information contained in the Tax Administration Act. The information that MSD collects and uses is not subject to the same restrictions. To meet their separate needs, IRD and MSD keep separate datasets. New Zealand’s joint research projects integrated research between IRD and other governmental agencies with related programs. The projects were designed to ensure that all disbursements of revenue through either direct program outlays or tax expenditures were administered effectively to meet the goals for social programs, making sure people get the assistance to which they are entitled. One example of joint research was the study of the WFF tax credits program. To overcome the problem of the separate datasets and still protect sensitive tax data, the New Zealand government approved a joint research program that created interagency linked datasets between IRD and MSD. Parliament intended that these linked datasets be used to evaluate the tax expenditures and discretionary spending programs, to ensure that the benefits of the overall program were being fully used by its participants. These linked datasets, known as the “WFF Research Datasets,” were constructed from the combined records of the MSD and IRD. They contained several years of data, and included information about all families who had received a WFF payment during these years. The data included monthly amounts of income received from salary and wages from employment and from the main benefit payments. The linked dataset information was to be used solely to analyze the results of WFF. It could not be used to take any action, whether adverse or favorable, against a particular individual. In 2004, MSD and IRD developed a Memorandum of Understanding (MOU) for the WFF program. The MOU included processes to share information while ensuring that all sensitive data were protected from unauthorized disclosure. The MOU permitted IRD to provide MSD with aggregate taxpayer information needed to conduct evaluations with a restriction that only allows IRD employees direct access to sensitive taxpayer information. However, IRD was authorized to distribute sensitive taxpayer information to authorized MSD employees if they were part of the joint research team and were sworn in as IRD employees. Swearing in MSD agents as IRD agents permitted IRD to apply the same sanctions to IRD and MSD agents who did not adhere to IRD’s data-protection policies. The WFF joint research revealed social and cultural barriers that prevented targeted participants from taking full advantage of the WFF program. These barriers included the perceived stigma from receiving government aid if the person could work or felt that the aid infringed on independence or self-sufficiency; transaction costs from accepting government aid such as taking time off from work, arranging childcare, or following many rules and regulations; low value of applying for the program when the person was close to the maximum eligibility threshold; and geographic barriers when the person lived in areas that were remote or had no transportation, telephone, or Internet. The WFF joint research provided information needed to identify the population that benefited from the program and reduce some of the barriers that kept recipients, particularly an indigenous population, from participating in the target program. 
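The sketch below illustrates, in general terms, the kind of record linkage and restricted use that the WFF Research Datasets described above involved. The file names, field names, and linkage key are hypothetical assumptions, and the sketch is not a description of New Zealand's actual systems.

```python
# Illustrative sketch of building a research-only linked dataset from two
# agencies' records, in the spirit of the WFF Research Datasets.
# File names, field names, and the linkage key are hypothetical.
import pandas as pd

ird = pd.read_csv("ird_monthly_income.csv")    # e.g., family_id, month, wages, wff_payment
msd = pd.read_csv("msd_benefit_payments.csv")  # e.g., family_id, month, main_benefit

# Link the two agencies' records on a common family identifier and month.
linked = ird.merge(msd, on=["family_id", "month"], how="outer")

# Carry only the variables needed for program evaluation into the research
# dataset; identifying details such as names and addresses are excluded.
research = linked[["family_id", "month", "wages", "wff_payment", "main_benefit"]]

# Analysts work with aggregates (e.g., uptake by month), not individual cases,
# because the linked data may not be used to take action against any individual.
uptake_by_month = research.groupby("month")["wff_payment"].agg(["count", "sum"])
```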
Since the inception of the WFF program in 2004, the joint research documented the following benefits from reducing barriers to the targeted population: The percentage of single parents working 20 hours or more increased from 48 percent in June 2004 to 58 percent in June 2007. This represented 8,100 additional single parents in the workforce. The number of single parents receiving benefits from MSD fell by 12 percent from March 2004 to March 2008. Those who received the benefits did so for a shorter time and stayed off the benefit programs longer. While structural differences exist between the New Zealand and U.S. tax systems, both systems use tax expenditures (i.e., tax credits in New Zealand). Unlike the United States, New Zealand has developed a method to evaluate the effectiveness of tax expenditures and discretionary spending programs through joint research that created interagency linked datasets. New Zealand did so while protecting confidential tax data from unauthorized disclosure. In 2005, we reported that the United States had substantial tax expenditures but lacked clarity on the roles of the Office of Management and Budget (OMB), Department of the Treasury, IRS, and federal agencies with discretionary spending programs to evaluate the tax expenditures. Consequently, the United States lacked information on how effective tax expenditures were in achieving their intended objectives, how cost-effectively benefits were achieved, and whether tax expenditures or discretionary spending programs worked well together to accomplish federal objectives. At that time, OMB disagreed with our recommendations to incorporate tax expenditures into federal performance management and budget review processes, citing methodological and conceptual issues. However, in its fiscal year 2012 budget guidance, OMB instructed agencies, where appropriate, to analyze how to better integrate tax and spending policies that have similar objectives and goals. Finland’s national and municipal governments as well as local church councils levy taxes. Nationally, 39 percent of all taxes are paid under individual and corporate income taxes and a capital gains tax. Taxes on goods, services, and property total about 33 percent of revenue; most of this revenue is from the Value Added Tax (VAT). The final 28 percent comes from social security taxes (e.g., national health insurance system and employment pension insurance). Finland’s individual income tax is levied on a graduated rate schedule with four tax brackets, ranging from 7.0 percent to 30.5 percent, with the top rate applying to income over €64,500 (US$92,441); the tax on investment income was levied at a flat rate of 28 percent in 2009. Finland’s corporate income tax is levied at a flat rate of 26 percent. Under the municipal tax, each municipal council sets its tax rate annually. For 2009, municipal taxes are levied at flat rates ranging from 16.5 percent to 21.0 percent of earned income and averaging 18.6 percent. Individuals who are members of the Evangelical-Lutheran Church or the Orthodox Church pay a church tax. For 2009, local church communities determine the rate of tax, which is levied at a flat rate between 1 and 2 percent. Using electronic means, Finland helps taxpayers estimate their tax withholding and prepares an income tax return for each individual taxpayer based on third-party information returns. The on-line Tax Card system, established in 2006, is an Internet-based system to help Finnish taxpayers estimate the withholding rate for individual income tax. 
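A minimal sketch of the kind of combined-rate estimate the Tax Card supports appears below. Only the 7.0 to 30.5 percent national range, the €64,500 top threshold, and the average municipal rate come from the 2009 figures cited above; the intermediate national bracket thresholds and the sample church rate are illustrative assumptions, and social security contributions are omitted for simplicity.

```python
# Sketch of estimating a combined Finnish withholding rate from a graduated
# national tax plus flat municipal and church taxes. Intermediate brackets and
# the sample flat rates are assumptions; social security is omitted.

def national_tax(earned_income):
    # Hypothetical 2009-style brackets ending at 30.5% above EUR 64,500.
    brackets = [(0, 13_100, 0.0), (13_100, 21_700, 0.07), (21_700, 35_300, 0.18),
                (35_300, 64_500, 0.22), (64_500, float("inf"), 0.305)]
    return sum((min(earned_income, hi) - lo) * rate
               for lo, hi, rate in brackets if earned_income > lo)

def estimated_withholding_rate(earned_income, municipal_rate=0.186, church_rate=0.01):
    total = national_tax(earned_income) + earned_income * (municipal_rate + church_rate)
    return total / earned_income

rate = estimated_withholding_rate(40_000)  # roughly a 30 percent withholding rate
```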
The Tax Card covers national taxes, municipality tax, social security tax, and church tax. Taxpayers access the Tax Card through the Web sites of their bank or the Finland Tax Administration. Using the Tax Card system, taxpayers can update their withholding rate as many times as needed throughout the year to adjust for events that increase or decrease their potential tax liability. For example, if the taxpayer takes a job with a higher salary, the taxpayer can estimate the change on his or her income tax liability by using the Tax Card system. Taxpayers enter information provided by the employer, based on payroll information, to estimate their adjusted withholding. Annually, 1.6 million taxpayers, about a third of those using the Tax Card, change their tax withholding rate. When the Tax Card has been completed, employees provide the withholding tax rate to their employer through regular mail or in person. If the employer is not notified of any withholding rate, the employer must withhold at the top marginal rate in Finland for all types of taxes—which is 60 percent of gross pay. Employers manually enter the withholding rate into their payroll systems. According to Tax Administration officials, some social benefits can complicate the estimation of the tax due and may not be accurately estimated during the tax year. For example, Finland has a deduction for the cost of travel between a residence and work. If the taxpayer does not accurately estimate the deductions or make changes as the year progresses, the Tax Card withholding rate will be inaccurate. Finland has been operating a tax-return preparation system since 2006. The Finnish Tax Administration prepares an income tax return for each individual taxpayer based on third-party information returns. According to Tax Administration officials, Finland uses information from over 30 types of information-return filers (e.g., employers, banks, and securities brokers). Tax Administration officials said that they have found very little misreporting on the information returns used to prepare the tax returns. They use many ways to try to verify the information. Some taxpayers will correct information returns when reviewing their prepared tax returns. Third parties can be penalized for inaccurate information and Finnish tax officials said those penalties are regularly assessed. The system prepares the return each tax year, which ends on December 31. According to Tax Administration officials, the individual tax returns are mailed for review during April. The taxpayer has until May to make changes to the paper return. Taxpayers can mark up the paper return for revisions and mail it to the Tax Administration whose staff keys in or electronically scans in the changes. Also, taxpayers can choose to make the changes to the return online, using the taxpayer’s account with the Tax Administration. According to Tax Administration officials, typically about 3.5 million people do not ask to change their tax return and about 1.5 million request a tax change. About 400,000 taxpayers will revise their return using the Tax Administration’s Internet portal. Typically, the average taxpayer takes about half an hour to do the adjustments online. One deduction, the commuting adjustment, is not reported on an information return. This adjustment accounts for changes to about 800,000 prepared returns. 
Overall, taxpayers need to show some proof to support the change to the prepared return, including any changes they make to the information returns the Tax Administration used to prepare the returns. For example, taxpayers showing deductions for mortgage interest that were not reported on information returns would need to show they own a house and the mortgage interest was paid. Or, if an information return reports interest as income, but taxpayers deduct the interest as paid on a loan, the taxpayers need to document the reason for their deduction claims. Finland does not prepare tax returns if individual taxpayers have business income. Rather, these taxpayers must file tax returns based on the data or business records that they maintained. However, some of these taxpayers with business income may get a partially prepared return on personal income and deductions based on third-party information on their wages and other personal income in Finland’s prepared return system. All businesses operating in Finland must register with the government. Providing enhanced electronic services has been widely recognized in Finland as an approach for improving taxpayer service while reducing costs. Electronic services provide taxpayers with constant access to assistance regardless of the time of day or distance from the tax administration office. According to Tax Administration officials, electronic systems that provide routine taxpayer assistance allow Tax Administration staff to respond to more complex taxpayer problems. Finland also moved to electronic tax administration to support national policies. As a national policy to encourage economic growth, Finland seeks to have a large private-sector workforce. According to an official of Finland’s government, a large number of citizens are nearing retirement. Thus, the government is seeking to reduce its workforce so that more workers are available for the private sector. To achieve this goal, Finland focused on making the delivery of government programs more efficient by using more electronic transactions. Another reason for electronic tax administration was to provide equal access to government services. Finnish law requires all e-services to be accessible to all Finnish citizens. With a significant segment of its population living in remote regions, according to officials, improving e-government provides more equal access to government services. To encourage equal access and use of the Internet for delivering services, Finland established standard speeds of Internet access in July 2009. Finland’s tax system is viewed positively by taxpayers and industry groups. Members of several industry and taxpayer groups told us that Finland has a simple, stable tax system, which makes compliance relatively easy. They also commented that the Tax Card and preprepared annual return system work well and are easy to use. As a result, few individuals use a tax advisor to help prepare and file the annual income tax return. We were told that individuals using a tax advisor have complex tax issues, such as owning a business or having complex investments. Electronic tax administration has advantages for the government and Finnish taxpayers. According to tax officials, cost savings result from spending less time to prepare and process tax returns. These officials said that electronic tax administration has helped to reduce their full-time-equivalent positions by over 11 percent from 2003 to 2009. 
Further, the tax withholding system results in a small amount of individual income tax withheld that needs to be refunded after final returns are filed. For tax year 2007, 8 percent of the tax withheld was refunded to taxpayers as compared to 26 percent refunded in the United States. Finland’s culture of cooperation and the resulting cooperative arrangements between government, banks, businesses, and taxpayers have led to acceptance of the new Tax Card Online service. According to public interest and trade groups in Finland, the Finnish society has a great deal of confidence in the banking system and its secure access. This confidence influenced the decision to place the Tax Card online service on bank Web sites. With taxpayers having regular access to a banking Web site, the banks offer a channel for delivering government services, according to government officials. Public interest and trade groups agreed, noting that the banking industry’s willingness to support the Tax Card enhanced its development. Representatives of a Finnish banking trade group said that placing the Tax Card system on their Web sites helped banks. That is, the more time customers spend on banks’ Web sites, the more opportunities the banks had to offer other services, helping to offset the cost of implementing the system. According to Finnish trade and public interest groups, Finland’s cooperative culture also supports the preprepared individual income tax return system. For this system to work properly, business and other organizations must file accurate information returns within 1 month after the end of the tax year. This short period for filing information returns creates some burden. The burden includes costs to purchase and install special software for collecting the information as well as preparing and filing the returns. According to a professional accounting organization in Finland, buying the yearly software updates can be expensive. Any update has to be available well before the start of the tax year so that transactions can be correctly recorded at the start of the year and not revised at the end of the year. In contrast to Finland’s self-described “simple” system, the U.S. tax system is complex and changing annually. Regarding withholding estimation, Finland’s Tax Card system provides taxpayers an online return system for regularly updating the tax amount withheld. For employees in the United States, the IRS’s Web site offers a withholding calculator to help employees determine whether to contact their employer about revising their tax withholding. Finland’s system prepares a notice to the employer that can be sent through the mail or delivered in person, whereas in the United States the taxpayers must file a form with the employer on the amount to be withheld based on the taxpayers’ estimation. In the United States, individual income tax returns are completed by taxpayers—not IRS—using information returns mailed to their homes and their own records. Taxpayers are to file an accurate income tax return by its due date. Unlike in Finland, U.S. individual taxpayers heavily rely on tax advisors and tax software to prepare their annual return. In the United States, about 90 percent of individual income tax returns are prepared by paid preparers or by the taxpayer using commercial software. In June 2003, the European Union (EU) adopted the Savings Taxation Directive to encourage tax compliance by exchanging information and in some cases using withholding. 
The directive is a multilateral agreement that establishes uniform procedures and definitions for exchanging information and facilitating the resolution of common technical problems. The 27 EU members and 10 dependent and associated territories agreed to participate in the directive, under which paying agents in one participating nation are to report interest paid to individual residents of other participating nations so that the information can be exchanged with the tax authorities of the owners’ home nations. With this information, tax authorities in the citizen’s nation are able to verify whether the citizen properly reported and paid tax on the interest income. Each of the 27 member nations has a separate tax system and varies in the tax rates imposed on personal income, as shown in table 3. The highest personal income tax rates range from 10 percent in Bulgaria to over 56 percent in Sweden. This range of tax rates is an important reason for the need for the exchange of information on income. Residents in higher-tax countries could be motivated to move capital outside of the country of residence to potentially avoid reporting income earned on investments of the capital. The directive provided a basic framework for information exchanges, defining essential terms such as beneficial owner of the asset paying interest, identity and residence of the owner, paying agents, interest payments, and information to be reported, and establishing automatic information exchange among members. The directive also states that five other nonmember nations agreed to information exchange upon request for information defined under the Savings Taxation Directive. During a transition period from 2005 through 2011, Belgium, Luxembourg, and Austria, as well as the five nonmember nations and six associated territories, agreed to a withholding tax. Under these agreements, a withholding tax was to be remitted at the rate of 15 percent during the first 3 years, 20 percent for the next 3 years, and 35 percent thereafter. The directive authorizes the withholding nations to retain 25 percent of the tax collected and transfer 75 percent of the revenue to the account owner’s home nation. The withholding nations may develop procedures so that the owners can request that no tax be withheld. These procedures generally require that the owner provide identification information to the paying agent or to the account owner’s home nation. The directive also requires the account owner’s home nation to ensure that the withholding does not result in double taxation. The home nation is to grant a tax credit equal to the amount of tax paid to the nation in which the account is located. If the tax paid exceeds the amount due to the home nation, the home nation is to refund to the account owner the excess amount that was withheld. The role of the EU Commission is to coordinate among the participants in the directive. The commission sets up and maintains contact points for communications among members. All information to be exchanged must be submitted no later than June 13 each year to the commission and follow the standardized Organisation for Economic Co-operation and Development (OECD) format. The information exchange is completely electronic and automatic. All information is sent and received through a secure network that only member countries can access. As of 2010, all member countries are using this standard format except for Switzerland, which is working with the EU on plans for information exchange. The commission is to keep the format updated and periodically review compliance by member countries. The commission is to gather statistics to measure overall performance and success of the directive. 
Member countries have agreed to provide the commission with the statistics necessary to gauge performance. Every 2 years the commission hosts a conference to receive feedback from member nations on the directive’s performance and to gauge its success. Additionally, every 3 years the commission reports to the European Parliament and Commission of the European Communities. The first report on the operation of the directive was issued in September 2008. The EU adopted the Savings Taxation Directive to encourage tax compliance by exchanging information and using withholding. Using a multilateral agreement provided a way to uniformly establish procedures and definitions for exchanging information as well as for resolving any common technical problems with information exchange across the entire EU. The September 2008 report to the EU described the status of the directive. The report found that 25 members started applying the rules as required in July 2005. In 2006, the first full year in which data were available, 17 members provided information to the exchange. Bulgaria and Romania began implementation in January 2007. The report concluded that the largest economies and financial centers reported the highest amounts of interest paid to other EU citizens. For 2006, Germany, France, Ireland, and the Netherlands accounted for over 98 percent of the dollar value of interest paid by all EU nations to citizens of other EU countries. The report concluded that data limitations created major difficulties for evaluating the effectiveness of the directive. The EU did not have information on withholding results or time-series information from before the directive began. Without this information, the EU had no benchmark to measure the effect of the changes. According to EU officials, the most common administrative difficulties have been information-technology system problems. Some members have not had the data formatted correctly, which caused problems when other member nations tried to access the data. For example, how member countries format mailing addresses has led to data access problems. To overcome this problem, most member countries insert the taxpayer’s mailing address in the free text field, but this makes the data difficult for other nations to analyze efficiently. Another example has been accessing data from languages that have special diacritical marks or characters. When information exchanged included these special characters, an error was created during the data importation process. The directive has suffered from other implementation problems, as follows. Investor behavior. EU staff said the commission tried to measure changes in the different types of investments before and after implementation of the directive. The commission had difficulty in identifying the overall effect the directive has had on individual investment choices because the data used are generally limited to interest-bearing investments. On the basis of decreases in some investors’ total interest savings, the report noted that investors appeared to change their investments before implementation to investments that were not covered by the directive. Withholding. The effectiveness of the withholding system under the Savings Taxation Directive is unclear. The report found that the 14 countries and dependent and associated territories applying the withholding provisions in 2006 shared €559.12 million (US$738 million) withheld on income earned in their nation with the account owner’s home nation. 
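The sketch below works through the transitional withholding arithmetic described earlier in this appendix for a single hypothetical interest payment: the paying nation withholds at the applicable rate, retains 25 percent of the tax, transfers 75 percent to the owner's home nation, and the home nation credits the full amount withheld, refunding any excess. The interest amount and the home-nation tax rate are hypothetical.

```python
# Worked sketch of the transitional withholding split and home-nation credit
# under the Savings Taxation Directive. Amounts and the home rate are hypothetical.

interest_paid = 10_000.0        # hypothetical interest earned abroad (euros)
withholding_rate = 0.20         # 15% (first 3 years), 20% (next 3 years), 35% thereafter
home_tax_rate = 0.30            # hypothetical rate in the owner's home nation

withheld = interest_paid * withholding_rate          # 2,000 withheld by the paying nation
retained_by_paying_nation = 0.25 * withheld          # 500 kept where the account is held
transferred_to_home_nation = 0.75 * withheld         # 1,500 remitted to the home nation

home_tax_due = interest_paid * home_tax_rate         # 3,000 owed at home
credit = withheld                                    # credit equals the full tax withheld
net_due_or_refund = home_tax_due - credit            # 1,000 still due; a negative value is refunded
```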
Some articles have commented that given the low withholding rates in the early years, taxpayers with higher tax rates in their home nation may have chosen not to report the income. Definitions. The EU identified problems with the definition of terms, making uniform application of the directive difficult. First, the commission’s report raised questions about consistency of coverage of payments made from life insurance contracts where investments were made in securities or funds. Second, confusion existed over whether some paying agents were covered by EU rules on investment managers or by the definition established under the directive for noncovered paying agents. Third, identifying the account owners was another problem. In general, the EU report suggests that improved monitoring and follow-up by the home nation can help locate paying agents in third countries and ensure accurate information on the citizen who owns the account. The EU is considering several solutions such as enforcing existing customer due diligence rules that are to be used by domestic paying agents, who would transmit interest payments to the owners. These rules require that paying agents know who they are paying and not facilitate transactions to mask the owner(s) and avoid taxes or other legal requirements. Nevertheless, EU officials stated that the quality of data has improved over the years. The EU officials have worked with EU member nations to resolve specific data issues, which has contributed to the effective use of the information exchanged under the directive. Generally, unlike the EU multilateral directive, the United States establishes bilateral information-sharing agreements. Those agreements allow for automatic information exchange, but definitions of terms, technical standards, and other matters are not worked out and adopted multilaterally. Resolution of some of those issues may be facilitated by the United States’ participation in the Convention on Mutual Administrative Assistance in Tax Matters, which includes provisions for the exchange of information and has been ratified by 15 nations and the United States. The United Kingdom’s (UK) main sources of tax revenue are income tax, national insurance contributions, value added tax, and corporate tax. Her Majesty’s Revenue and Customs (HMRC) also administers taxes assessed for capital gains, inheritance, various stamp duties, insurance premium tax, petroleum revenue, and excise duties. The income tax system, under which the tax year runs from April 6 through April 5, taxes individuals on their income from various sources, for example, employment earnings, self-employment earnings, and property income. Taxable individuals under 65 years of age receive a tax-free personal allowance (£6,475, or US$10,410 for the 2010-11 tax year). If their total income is below the allowance amount, no tax is payable. The three main individual income tax rates for income above the personal allowance are 20 percent (£0-£37,400 or up to US$60,132), 40 percent (£37,401-£150,000 or up to US$241,170), and 50 percent (over £150,000 or over US$241,170). HMRC uses three payment systems to collect income tax from individual taxpayers, depending on the type of income and whether the individual is employed, self-employed, or retired. Pay As You Earn (PAYE) is used to withhold tax on wages and salaries paid to individuals by employers. Employers are required to notify HMRC every time an employee starts or stops working for them. 
Then, HMRC determines a tax withholding code for each individual, and employers use the tax codes, in conjunction with tax tables, to calculate the amount of tax to be deducted. Self-assessment tax returns are used by some employees with income taxed at higher rates or with complicated tax affairs and by self-employed individuals with different kinds of business income. Under at-source collection, tax on income such as interest and dividends is withheld when the income is paid. For example, tax is deducted from bank interest as it is credited to an individual. According to HMRC officials, the majority (68 percent) of taxpayers pay their tax solely through the PAYE system without having to submit a return to HMRC. Other actions have relieved a large number of taxpayers from submitting a return. For example, the UK requires that tax on some income paid to individuals (such as bank interest) be withheld at a 20 percent rate and remitted to HMRC by the payer, and capital gains income up to the first £10,100 (US$16,239) is exempted from tax. The UK also is working towards burden reduction for the average taxpayer by simplifying the tax return. For example, according to HMRC officials, information that is not necessary has been removed from the return to reduce the return filing burden, and those taxpayers who are required to file a return find it straightforward. HMRC uses data from information reporting and withholding under the PAYE system to simplify the reporting of tax liability on income tax returns for individuals. PAYE adjusts income tax withheld so that the individual’s tax liability is generally met by the end of the tax year. Information reporting helps HMRC and the individual taxpayer determine the total income tax liability, according to HMRC officials. Information returns are to report tax-related transactions by the taxpayer. They are to be supplied by banks and local governments to the taxpayer and HMRC at the end of the tax year. For example, banks are to provide interest payment information. Over 400 local government organizations are to report information on payments made to small businesses. Local governments, as both employers and contractors, must report information on payments made to others. The information provided by employers enables HMRC to update the employee’s tax record and issue a tax code to the new employer to start the withholding against employee earnings. HMRC calculates the PAYE code using information about the previous year’s income or other employment in the current tax year. Employers are to match the PAYE code to a tax table, which shows how much tax to withhold each pay period. The employer has to remit the withheld tax to HMRC on a monthly or quarterly basis to fulfill the taxpayer’s tax liability. HMRC annually reviews taxpayer records and issues updated PAYE codes before the start of the tax year for employers to apply from the start of that year. The individual will receive a notice showing how the tax code has been calculated. To maintain taxpayer confidentiality, the employer will only receive the tax code itself. HMRC can refund income tax overpayments or collect underpayments for previous tax years through adjustments to the PAYE code. HMRC reported in 2010 that around 5 million individuals overpaid or underpaid these taxes. HMRC officials said that they use information returns to help determine these adjustments under PAYE. 
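For context on the amounts involved, the sketch below shows the band arithmetic that underlies a PAYE deduction, using the 2010-11 personal allowance and the three rates cited above. Actual PAYE relies on HMRC-issued tax codes, official tax tables, and a cumulative year-to-date calculation; the flat annual split shown here is a simplifying assumption.

```python
# Simplified sketch of UK income tax band arithmetic for 2010-11.
# The even monthly split is a simplification of the tax-table mechanics.

PERSONAL_ALLOWANCE = 6_475  # pounds, 2010-11, taxpayers under 65
BANDS = [(37_400, 0.20), (150_000, 0.40), (float("inf"), 0.50)]  # taxable income above the allowance

def annual_income_tax(gross_income):
    taxable = max(gross_income - PERSONAL_ALLOWANCE, 0)
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
            lower = upper
    return tax

def monthly_paye_deduction(gross_income):
    return annual_income_tax(gross_income) / 12  # even monthly split (simplified)

# e.g., a 30,000 pound salary: taxable income 23,525, tax 4,705, about 392 per month.
```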
In lieu of having their PAYE codes adjusted, taxpayers may receive a onetime refund of the overpayment or pay the underpaid amount in one lump sum. Taxes owed usually are collected through code adjustments as long as the taxpayer stays within the PAYE system. HMRC also uses information reporting and withholding to assess the compliance risks on filed returns. In assessing compliance risks, HMRC is attempting to identify underpaid and overpaid tax. The majority of the information for risk assessment is collected centrally from information returns, tax withholdings, filed tax returns, and public sources. This information is mined for risks by special risk-assessment teams. According to HMRC officials, the outcomes of such mining are to be used to verify tax compliance. If low compliance is found, risk specialists are to develop programs to increase compliance. The data mining uses electronic warehouse “Data Marts” that HMRC has had for about 10 years. They have been configured with subsets of data and have been supplemented by sophisticated analysis tools for doing risk assessments. For example, an analyst can create reports to assess the risk for all self-assessment income tax returns where the legal expense is above a specified amount. HMRC officials told us that Data Marts had recently been revamped and a strategic capability was added that links related information such as a business that files a corporate tax return for its business profits, pays value added tax, and has directors who submit self-assessment returns. According to HMRC officials, the use of Data Marts combined with their more recent Strategic Risking Capability has allowed them to assess risks at the national and regional levels. HMRC officials said that they have moved towards national risk assessments because risk has not proven to be geographically based at regional levels. HMRC officials noted that while a return is being assessed for one type of risk, another type of risk can be found. HMRC is attempting to uncover emerging compliance risks by combining and reviewing data from the various sources in the Data Marts and elsewhere. The risk assessment process has two steps, resulting in identifying tax returns for examination. The first step is to identify tax returns that have an inherent risk because of the taxpayers’ size, complexity of the tax return, and past tendency for noncompliance. For example, returns filed by high-wealth individuals are viewed as risky returns that are sent to a related specialty office. The second step assesses risk on returns that are not sent to a specialty office. HMRC officials said that a relatively large proportion of the risk-assessment effort focuses on the self-employed, who are seen as having the greatest risk for tax noncompliance since they usually are not under the PAYE system (unless they have some wage income) and instead are to file a self-assessment return. HMRC has separate risk-assessment approaches, depending on the type of individual taxpayer, as discussed below. For individuals under the PAYE system, HMRC’s computers capture most of the necessary data and the system carries out routine checks to verify data and link it to the taxpayer record. A risk to the PAYE system arises when employees receive benefits from their employers that are not provided to HMRC at the time it determines the annual tax code. 
Employer benefits may include a car, health insurance, or professional association fees that employers report on information returns after the tax year and that may be subject to income tax. If these benefits are not included in the tax code, then an underpayment of tax is likely to arise. The unpaid tax can be recovered by an annual reconciliation or when the employee reports the benefits on the employee’s self-assessment tax return. Individuals not under the PAYE system are required to file a self-assessment tax return. To assess risk, HMRC checks some self-assessment tax returns for consistency by comparing them to returns from previous years, focusing on small businesses. For example, if the legal expense jumped from £5,000 to £100,000 (US$8,039 to US$160,780) over 2 years, HMRC may decide to review the reason. HMRC permits any self-employed small business with gross receipts of less than £68,000 (US$109,330) to file a simple three-line tax return to report business income, expenses, and profit. HMRC officials said that the threshold allows over 85 percent of all self-employed businesses to file simplified returns with less burden. According to HMRC officials, their policy is to collect as much data as possible up front through information returns and correct the amounts of tax due with the PAYE system, facilitating the payment of tax liabilities. Since information is shared with HMRC, taxpayers are likely to voluntarily comply if they have to file a tax return. Further, data from information reporting and withholding are to help simplify or eliminate tax reporting at the end of a tax year. According to HMRC officials, the PAYE system makes it unnecessary for most wage earners to file an annual self-assessment tax return. HMRC conducts risk assessments because staff cannot check every tax return in depth due to the large number of taxpayers and the need to lower the costs of administering the tax system. Data from information reporting and withholding provide consistent sources for doing risk assessments. HMRC officials said the income tax system has been simplified because most individual taxpayers fall under the PAYE system, which generally relieves them of the burden of filing a tax return. Even so, some implementation problems have occurred. The House of Commons identified problems with an upgrade to the PAYE information system in 2009-10. The upgrade was to combine information on individuals’ employment and pension income into a single record to support more accurate tax withholding codes and reduce the likelihood of over- and underpayments of tax. However, software problems delayed processing 2008-09 PAYE returns for a year. In addition, data-quality problems from the upgraded PAYE system for 2010-11 generated about 13 million more annual tax coding notices than HMRC had anticipated, and some were incorrect or duplicates. With these problems, of the 45 million PAYE records to be reconciled, 10 million cases needed to be reconciled manually. The House of Commons reported a backlog of cases before the PAYE system was upgraded. Limitations of the previous PAYE system and increasingly complex working patterns have made it difficult to reconcile discrepancies without manual intervention. As of March 2010, a backlog of PAYE cases affected an estimated 15 million taxpayers from 2007-08 and earlier; the backlog included an estimated £1.4 billion (US$2.25 billion) of tax underpaid and £3 billion (US$4.82 billion) of tax overpaid. 
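Returning to the consistency checks HMRC applies to self-assessment returns, the sketch below illustrates the general idea of a year-over-year screen that flags sharp jumps in a reported expense, similar to the £5,000-to-£100,000 example HMRC officials cited. The field names, thresholds, and data source are hypothetical; HMRC's actual Data Marts and risk tools are considerably more elaborate.

```python
# Sketch of a year-over-year consistency screen for self-assessment returns.
# Field names, thresholds, and the data file are hypothetical assumptions.
import pandas as pd

returns = pd.read_csv("self_assessment_returns.csv")  # taxpayer_id, tax_year, legal_expense

returns = returns.sort_values(["taxpayer_id", "tax_year"])
returns["prior_legal_expense"] = returns.groupby("taxpayer_id")["legal_expense"].shift(1)

# Flag cases where the expense more than tripled and exceeds a review floor.
flagged = returns[
    (returns["legal_expense"] > 3 * returns["prior_legal_expense"])
    & (returns["legal_expense"] > 50_000)
]
```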
HMRC has reported that risk assessment has provided three benefits: (1) improved examination decisions to ensure that they are necessary and reduce the burden on compliant taxpayers; (2) tailored examinations to the risk in question; and (3) deterred taxpayers from concealing income. HMRC’s risk-assessment approach has increasingly focused on providing help and support to individuals and smaller businesses to voluntarily comply up front. To minimize the need for examinations, HMRC aims to help larger businesses achieve greater and earlier certainty on their tax liabilities. HMRC’s sharper focus on risk assessment means that businesses with reliable track records of managing their own tax risks and being open in their dealings with HMRC benefit from fewer HMRC examinations, while those with the highest risks can expect a more robust challenge from dedicated teams of specialists. The UK and United States both have individual income tax returns and use information reporting and tax withholding to help ensure the correct tax is reported and paid. However, differences exist between the countries’ systems. The United States has six tax rates that differ among five filing statuses for individuals (i.e., single, married, married filing separately, surviving spouse, or head of household) and cover all types of taxable income. In general, the UK system has three tax rates, one tax status (individuals), and a different tax return depending on the taxable income (e.g., self-employed or employed individuals). U.S. income tax withholding applies to wages paid but, unlike in the UK, not to interest and dividend income. U.S. wage earners, rather than the Internal Revenue Service, are responsible for informing employers of how much income tax to withhold, if any, and must annually self-assess and file their tax returns, unlike most UK wage earners. Another major difference is that the United States automatically matches data from information returns and the withholding system to data from the income tax return to identify individuals who underreported income or failed to file required returns. Matching is done using a unique identifier, the taxpayer identification number (TIN). HMRC officials told us that they have no automated document-matching process and that the UK does not use TINs as a universal identifier, which is needed for wide-scale document matching. The closest form of unique identifier in the UK is the national insurance number. HMRC officials said they are barred from using the national insurance number for widespread document matching. Instead, HMRC officials said that they may do limited manual document matching in risk assessments and compliance checks. For example, HMRC manually matches some taxpayer data—such as name, address, and date of birth—from bank records to data on tax returns. Australia has a federal system of government with revenue collected at the federal, state, and local levels. For 2009-2010, about 92 percent of federal revenue was collected from taxes rather than nontax sources, like fees. The principal source of federal revenue for Australia is the income tax, which accounted for about 71 percent. Australia’s state and local governments rely on grants from the national government and have limited powers to raise taxes. The states receive significant financial support from the federal government. In 2009-10, total payments to the states were 28 percent of all federal expenditures. Individuals accounted for about 65 percent of the 2009-2010 income tax revenue. 
The system is progressive with tax rates up to 45 percent for taxable income in excess of A$180,000 (US$161,622). In 2007-2008, a small proportion of Australian taxpayers paid a large proportion of Australian taxes, as shown in figure 1. The Australian High Net Wealth Individuals (HNWI) program focuses on the characteristics of wealthy taxpayers that affect their tax compliance. According to the Australian Taxation Office (ATO), in the mid-1990s, ATO was perceived as enforcing strict sanctions on average taxpayers but not the wealthy. ATO found that high-wealth taxpayers, those with a net worth of more than A$30 million (US$20.9 million), tend to have complex business arrangements, which made it difficult for ATO to identify and assure compliance. ATO concluded that the wealthy required a different tax administration approach. ATO set up a special task force to improve its understanding of wealthy taxpayers, identify their tax planning techniques, and improve voluntary compliance. Initially, the program focused on the tax return filed by a wealthy individual. Due to some wealthy taxpayers’ aggressive tax planning, which ATO defines as investment schemes and legal structures that do not comply with the law, ATO quickly realized that it could not reach its goals for voluntary compliance for this group by examining taxpayers as individual entities. To tackle the problem, ATO began to view wealthy taxpayers as part of a group of related business and other entities. Focusing on control over related entities rather than on just individual tax obligations provided a different understanding of wealthy individuals’ compliance issues. To address the special needs of the wealthy, ATO developed publications that included a separate high-wealth income tax return form, a questionnaire on the wealthy as an entity, and a tax guide, Wealthy and wise—A tax guide for Australia’s wealthiest people. According to ATO, a number of factors led to the HNWI program. First, ATO was dealing with a perceived public image that it showed preference to the wealthy while enforcing strict sanctions on average taxpayers during the 1990s. Second, ATO was perceived as losing revenue from noncompliant taxpayers. Third, high-wealth individuals used special techniques to create and preserve their income and wealth through a “business life cycle.” The cycle includes creating, maintaining, and passing on wealth through complex tax shelters. For example, businesses owned or controlled by wealthy individuals are more likely to have more diverse business arrangements, which tend to spread wealth across a group of companies and trusts. Each of these groups controlled by wealthy individuals was classified as a separate taxpayer entity, which made understanding the tax implications of these networks of entities difficult for the ATO. The HNWI approach followed ATO’s general compliance model. The model’s premise is that tax administrators can influence tax compliance behavior through their responses and interventions. Since taxpayers have different attitudes on compliance, ATO used varied responses and interventions tailored to promote voluntary tax compliance across different taxpayer groups. The first part of the standard model is to understand five factors that influence taxpayer compliance. The factors are Business, Industry, Social, Economic, and Psychological. 
For example, the Business factor included the size, location, nature, and capital structure of the business as well as its financial performance—all of which help ATO understand why compliance or noncompliance occurs. The second part of the model involves taxpayers’ attitudes on compliance. It refers to one of four attitudes that a taxpayer may adopt when interacting with tax regulatory authorities. These attitudes are being willing to do the right thing, trying to do the right thing, not wanting to comply, and having decided not to comply. The third part of the model aligns four compliance strategies with the four taxpayer attitudes on compliance and refers to the degree of ATO enforcement under the concept of responsive regulation. ATO prefers to simplify the tax system and promote voluntary compliance through self-regulation. If the taxpayer tries to comply, ATO should respond by helping the taxpayer be compliant. If the taxpayer is not motivated to comply, ATO should respond to the level of noncompliance with some degree of enforcement, ending with harsh sanctions for the truly noncompliant. ATO created a High Wealth Individual (HWI) taskforce to assess wealthy individuals on their probability of compliance and place them into one of four broad risk categories using its Risk Differentiation Framework (RDF). RDF is similar to the compliance model in that it is to assess the tax risk and determine the intensity of the response for those with high net wealth, ranging from minimizing burden on compliant wealthy taxpayers to aggressively pursuing the noncompliant. The four broad categories of the RDF are as follows: Higher Risk Taxpayers—ATO performs continuous risk reviews of them with the focus on enforcement. Medium Risk Taxpayers—ATO periodically reviews certain of their transactions, or cases where there is a declining trend in effective tax performance, with a focus on enforcement. Key Taxpayers—ATO continuously monitors them with the focus on service. Low Risk Taxpayers—ATO periodically monitors them with the focus on service. The HNWI program has produced financial benefits since its establishment in 1996. ATO 2008 data showed that the program had collected A$1.9 billion (US$1.67 billion) in additional revenue and reduced revenue losses by A$1.75 billion (US$1.5 billion) through compliance activities focused on highly wealthy individuals and their associated entities. ATO’s approach also has been adopted by other tax administrators. According to a 2009 Organisation for Economic Co-operation and Development (OECD) study, nine other OECD countries, including the United States, had adopted some aspect of Australia’s HNWI program. Like ATO, the IRS is taking a close look at high-income and high-wealth individuals and their related entities. In 2009, IRS formed the Global High Wealth Industry (GHWI) program to take a holistic approach to high-wealth individuals. IRS consulted with the ATO to discuss ATO’s approach to the high-wealth population as well as its operational best practices. As of February 2011, GHWI field groups had a number of high-wealth individuals and several of their related entities under examination. One difference is that Australia has a separate income tax return for high-wealth taxpayers to report information on assets owned or controlled by HNWIs. In contrast, the United States has no separate tax return for high-wealth individuals and generally does not seek asset information from individuals. 
According to IRS officials, the IRS traditionally scores the risk of individual tax returns based on individual reporting characteristics rather than a network of related entities. However, IRS has been examining how to do risk assessments of networks through its GHWI program since 2009. Another difference is that ATO requires HNWIs to report their business networks, and IRS currently does not. Hong Kong’s Inland Revenue Department (IRD) assesses and collects the “earnings and profits tax,” which includes a Profits Tax, Salaries Tax, and Property Tax. IRD also assesses and collects certain “duties and fees” including a stamp duty, business registration fees, betting duty, and estate duty. Hong Kong only taxes income from sources within Hong Kong. Principal revenue sources for tax year 2009-10 are shown in figure 2. According to a Hong Kong tax expert, Hong Kong created the Salaries Tax at the start of World War II without using periodic tax withholding. The lack of withholding was not then, and is not now, considered to be a significant problem. The Salaries Tax is paid by about 40 percent of the estimated 3.4 million wage earners in Hong Kong, while the other 60 percent are exempt from the Salaries Tax. Taxpayers whose salary income is lower than their entitlement to deductions (e.g., basic allowance, child allowance, or dependent parent allowance) are exempt from paying Salaries Tax, and neither they nor IRD prepare a tax return for this income. However, exempt taxpayers may receive a tax return from IRD once every few years to verify their tax-exempt status. If these exempt taxpayers receive a tax return from IRD, they are required to complete and submit it within 1 month. The Salaries Tax rates are fairly low, according to Hong Kong tax experts. The Salaries Tax has progressive rates starting at 2 percent of the adjusted salary earned, and the tax may not exceed the standard rate of 15 percent. In comparison, the highest personal income tax rates in the EU range from about 10 percent to over 56 percent, as described in appendix IV. Hong Kong does not use periodic tax withholding (e.g., biweekly or monthly) by employers to collect Salaries Taxes. Rather, IRD collects the Salaries Tax through two payments from taxpayers for a tax year, which runs from April 1 to March 31. The first payment is due in January (9 months into the tax year) and is to be 75 percent of the estimated tax for the whole year. The second payment is for the remaining 25 percent, which is due 3 months later in April—immediately after the end of the tax year. In May, IRD is to mail the tax return to the individual for the just-completed tax year based on information provided by employers and other sources. Information reporting to IRD has four parts. First, employers must report when each employee is hired and the expected annual salary amount. Second, at the end of the tax year, employers must report the salary paid to each employee. Third, employers must report when an employee ceases employment. Fourth, employers must report and temporarily withhold payments to an employee they know intends to leave Hong Kong. If the employer fails to comply with these requirements without a reasonable excuse, penalties may be imposed. Individuals have 1 month to file the return. For those who elect to file their returns electronically, IRD will prefill the return based on information provided in their past returns and by their employers. 
They have a month and a half to review the prepared tax return, make any revisions such as changes to deductions, and file it with IRD. IRD reviews the filed tax returns to determine the final Salaries Tax. IRD electronically screens all returns to check for consistency between the information provided by the employer and the taxpayer. Assessments will normally be made based on the higher amount reported, and taxpayers have the right to object within 1 month. IRD also can cross-check reported salary amounts with salary deductions claimed by businesses on Profits Tax returns, which should normally be supported by information returns on employee salary amounts. If the final Salaries Tax for the tax year turns out to be higher than the estimated tax assessment, taxpayers are to pay the additional tax along with the first payment of the estimated tax for the next tax year during the following January, as shown in figure 3. Several factors contribute to Hong Kong's collection of the Salaries Tax through two payments for a tax year without resorting to periodic withholding by employers. The tax affects only the roughly 40 percent of wage earners with the highest salaries and uses relatively low tax rates, making it more likely that the taxpayers will have the funds necessary to make the two payments when due. The simplicity of Hong Kong's tax system, according to Hong Kong tax experts, makes it easier to compute tax liability and to manage the payments. IRD uses a combination of controls to assure that tax payments are made, according to a senior IRD official. In addition to information reporting, island geography contributes to the controls. Hong Kong entry/departure points are limited, and tax evaders are likely to be identified. The Hong Kong government can prevent a tax evader from leaving or entering Hong Kong until the tax is paid. IRD has varied processes to trace the assets of delinquent taxpayers as part of collecting any unpaid tax. Culture also encourages taxpayers to pay their taxes. Hong Kong experts said taxpayers fear a loss of face if they are recognized as noncompliant, which could reflect negatively on the family. A Hong Kong official told us that residents try to avoid being taken to court. An expert on public opinion in Hong Kong told us that this cultural attitude generates high tax morale. The expert told us that Hong Kong residents have high regard for Hong Kong's government as being "cleanly" run and as putting tax revenues to good use. IRD is viewed as treating all taxpayers fairly and equally. A senior official of Hong Kong's IRD believes that the Salaries Tax collection system leads to high tax compliance. Low tax rates, in concert with a simple tax system that offers generous deductions and effective enforcement, mean that taxpayers are fairly compliant, according to the Hong Kong official. It also means that few taxpayers use a tax preparer because the tax forms are very straightforward and the tax system is "stable." The official also said that taxpayers comply because the cost of noncompliance is high. If a taxpayer does not pay by the due date, the costs include paying the tax liability, interest surcharges on the debt, and legal costs. Further, submitting an incorrect tax return without reasonable excuse may carry a fine of HK$10,000 (US$1,283) plus three times the amount of tax underpaid and imprisonment. Unlike Hong Kong's twice-a-year payments for the Salaries Tax, the U.S. income tax on wages relies on periodic tax withholding.
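Before turning to the U.S. approach, the two-payment mechanics described above can be summarized in a short sketch. In the code below, the 75/25 split of the estimated tax, the January and April timing, and the settling of any shortfall against the final assessment with the next January payment follow the description above; the exact due dates (the first of each month), the function name, and the example amounts are assumptions made only for illustration.

```python
from datetime import date
from typing import List, Optional, Tuple

def payment_schedule(tax_year_start: int, estimated_tax: float,
                     final_tax: Optional[float] = None) -> List[Tuple[date, float]]:
    """Return (due date, amount) pairs for one April-March Salaries Tax year."""
    schedule = [
        # 75 percent of the estimated tax is due in January, 9 months into the
        # tax year; the remaining 25 percent is due the following April.
        (date(tax_year_start + 1, 1, 1), 0.75 * estimated_tax),
        (date(tax_year_start + 1, 4, 1), 0.25 * estimated_tax),
    ]
    if final_tax is not None:
        # Any shortfall against the final assessment is collected together
        # with the next tax year's January payment.
        schedule.append((date(tax_year_start + 2, 1, 1), max(0.0, final_tax - estimated_tax)))
    return schedule

# Example: HK$30,000 estimated tax for the 2009-10 tax year, HK$33,000 finally assessed.
for due, amount in payment_schedule(2009, estimated_tax=30_000, final_tax=33_000):
    print(due.isoformat(), f"HK${amount:,.0f}")
```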
IRS provides guidance (e.g., Publication 15) on how and when employers should withhold income tax (e.g., every other week) and deposit the withheld income taxes (e.g., monthly). Further, the U.S. individual tax rates are higher and the system is more complex. These tax rates begin at 10 percent and progress to 35 percent. Further, the United States taxes many forms of income beyond salary income on the individual tax return. Nations have many choices on how to structure their tax systems across the federal, as well as state and local, government levels. The proportion of revenue collected at each governmental level can widely vary. Finland, New Zealand, and the United Kingdom (UK) have a unitary system in which government, including tax administration, is generally centralized at the national level with limited state and local government. For example, in New Zealand, the national government assessed about 90 percent of all the revenue collected across the nation. In contrast, the United States has a federal system in which the national level shares governmental authority with state and local governments. In the United States about half of all tax revenue is collected by the national government and about half is collected by the 50 states and tens of thousands of local governments. The revenue data in table 4 below were provided by each nation and compiled by the Organisation for Economic Co-operation and Development (OECD) for consistent presentation. These data cover all taxes in each nation including federal and state/local levels. Using these data, we computed the percent that each type of tax represents of the nation’s total revenue. OECD provided the following definition for each of the major categories of tax in the table: Taxes are compulsory unrequited payments to general government and are not for benefits provided by government to taxpayers in proportion to their payments. Governments include national governments and agencies whose operations are under their effective control, state and local governments and their administrations, certain social security schemes and autonomous governmental entities, excluding public enterprises. Taxes on income, profits, and capital gains cover taxes levied on the net income or profits (i.e., gross income minus allowable tax deductions) of individuals and businesses (including corporations). Also covered are taxes levied on the capital gains of individuals and enterprises, and gains from gambling. Social security contributions are classified as all compulsory payments that confer an entitlement to receive a (contingent) future social benefit. Such payments are usually earmarked to finance social benefits and are often paid to institutions of general government that provide such benefits. These social security benefits would include unemployment insurance benefits and supplements, accident, injury and sickness benefits, old-age, disability and survivors’ pensions, family allowances, reimbursements for medical and hospital expenses or provision of hospital or medical services. Contributions may be levied on both employees and employers. Taxes on payroll and workforce cover taxes paid by employers, employees, or the self-employed either as a proportion of payroll or as a fixed amount per person, and which do not confer entitlement to social benefits. Taxes on property, goods, and services cover recurrent and nonrecurrent taxes on the use, ownership, or transfer of property. 
These include taxes on immovable property or net wealth, taxes on the change of ownership of property through inheritance or gift, and taxes on financial and capital transactions. Taxes on goods and services include all taxes and duties levied on the production, sale, and lease of goods or services. This category covers multistage cumulative taxes, general sales taxes, value added taxes, excise taxes, and taxes levied on imports and exports of goods. Table 4 shows that the largest source of revenue for 4 of the 5 countries is the tax on individuals' and corporations' income, profits, and capital gains. Also, the tax paid by individuals is a larger percentage of revenue than the corporation tax in each country. The tax on property, goods, and services is the next most important tax, except in the UK, where the income tax is the second largest source. A large component of the taxes on property, goods, and services is the value added tax and sales tax. In Australia, New Zealand, the UK, and Finland, value added tax and sales tax ranged from 25 percent to 31 percent of the taxes collected in the nation. The United States does not have a value added tax, but sales taxes alone totaled about 14 percent of all U.S. revenue. In addition to the contact named above, Thomas Short, Assistant Director; Juan P. Avila; Debra Corso; Leon Green; John Lack; Alma Laris; Andrea Levine; Cynthia Saunders; Sabrina Streagle; and Jonda VanPelt made key contributions to this report.
The Internal Revenue Service (IRS) and foreign tax administrators face similar issues regardless of the particular provisions of their laws. These issues include, for example, helping taxpayers prepare and file returns, and assuring tax compliance. Understanding how other tax administrators have used certain practices to address common issues can provide insights to help inform deliberations about tax reform and about possible administrative changes to the existing U.S. system to improve compliance, better serve taxpayers, reduce burdens, and increase efficiencies. GAO was asked to describe (1) how foreign tax administrators have approached issues that are similar to those in the U.S. tax system and (2) whether and how the IRS identifies and adopts tax administration practices used elsewhere. To do this, GAO reviewed documents and interviewed six foreign tax administrators. In some cases, GAO also interviewed tax experts, tax practitioners, taxpayers, and trade-group representatives who were selected based on their expertise or involvement in developing or using the foreign systems. GAO also examined documents and met with IRS officials. Foreign and U.S. tax administrators use many of the same practices, such as information reporting, tax withholding, Web-based services, and new approaches to promoting tax compliance. These practices, although common to each system, have important differences. Although differences in laws, culture, or other factors likely would affect the transferability of foreign tax practices to the United States, these practices may provide useful insights for policymakers and the IRS. For example, New Zealand integrates evaluations of its tax and discretionary spending programs. The evaluation of its Working For Families tax benefits and discretionary spending, which together financially assist low- and middle-income families to promote employment, found that its programs aided the transition to employment but that it still had an underserved population; these findings likely would not have emerged from separate evaluations. GAO previously has reported that the United States lacks clarity on evaluating tax expenditures and related discretionary spending programs and does not generally undertake integrated evaluations. In Finland, electronic tax administration is part of a government policy to use electronic services to lower the cost of government and encourage private-sector growth. Overall, according to Finnish officials, electronic services have helped to reduce Tax Administration staff by over 11 percent from 2003 to 2009 while improving taxpayer service. IRS officials learn about these practices based on interactions with other tax administrators and participation in international organizations, such as the Organisation for Economic Co-operation and Development. In turn, the IRS may adopt new practices based on the needs of the U.S. tax system. For example, in 2009, the IRS formed the Global High Wealth Industry program. The IRS consulted with Australia about its approach and operational practices. GAO makes no recommendations in this report.
Despite efforts undertaken under the Troubled Asset Relief Program (TARP) to bolster the capital of the largest financial institutions, market conditions in the beginning of 2009 were deteriorating, and public confidence in the ability of financial institutions to withstand losses and to continue lending was further declining. On February 10, 2009, Treasury announced the Financial Stability Plan, which outlined measures to address the financial crisis and restore confidence in the U.S. financial and housing markets. The goals of the plan were to (1) restart the flow of credit to consumers and businesses, (2) strengthen financial institutions, and (3) provide aid to homeowners and small businesses. Under the Supervisory Capital Assessment Program (SCAP), the stress test would assess the ability of the 19 largest bank holding companies (BHC) to absorb losses if economic conditions deteriorated further in a hypothetical "more adverse" scenario, characterized by a sharper and more protracted decline in gross domestic product (GDP) growth, a steeper drop in home prices, and a larger rise in the unemployment rate than in a baseline consensus scenario. BHCs that were found not to meet the SCAP capital buffer requirement under the "more adverse" scenario would need to provide a satisfactory capital plan to address any shortfall by raising funds, privately if possible. The Capital Assistance Program (CAP), which was a key part of the plan, would provide backup capital to financial institutions unable to raise funds from private investors. Any of the 19 BHCs that participated in the stress test and had a capital shortfall could apply for capital from CAP immediately if necessary. The timeline in figure 1 provides some highlights of key developments in the implementation of SCAP. In a joint statement issued on February 10, 2009, Treasury, along with the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) (collectively referred to as the SCAP regulators), committed to design and implement the stress test. According to a Treasury official, the department generally did not participate in the design or implementation of SCAP, but was kept informed by the Federal Reserve during the stress test. The SCAP regulators developed economic assumptions to estimate the potential impact of further losses on BHCs' capital under two scenarios. The baseline scenario reflected the consensus view about the depth and duration of the recession, and the more adverse scenario reflected a plausible but deeper and longer recession than the consensus view. Regulators then calculated how much capital, if any, was required for each BHC to achieve the required SCAP buffer at the end of 2010 under the more adverse scenario. The SCAP assessment examined tier 1 capital and tier 1 common capital, and the BHCs were required to raise capital to meet any identified capital shortfall (either tier 1 capital or tier 1 common capital). Tier 1 risk-based capital is considered core capital—the most stable and readily available for supporting a bank's operations—and includes elements such as common stock and noncumulative perpetual preferred stock. SCAP's focus on tier 1 common capital, a subset of tier 1 capital, reflects the recent regulatory push for BHCs to hold a higher quality of capital. The focus on common equity reflected both the long-held view by bank supervisors that common equity should be the dominant component of tier 1 capital and increased market scrutiny of common equity ratios, driven in part by deterioration in common equity during the financial crisis.
Common equity offers protection to more senior parts of the capital structure because it is the first to absorb losses in the capital structure. Common equity also gives a BHC greater permanent loss absorption capacity and greater ability to conserve resources under stress by changing the amount and timing of dividends and other distributions. To protect against risks, financial regulators set minimum standards for the capital that firms are to hold. However, SCAP set a one-time minimum capital buffer target intended to protect BHCs against losses that were worse, and preprovision net revenue (PPNR) that was weaker, than anticipated during the 2009 to 2010 period. For the purposes of SCAP, the one-time target capital adequacy ratios are at least 6 percent of risk-weighted assets in tier 1 capital and at least 4 percent in tier 1 common capital, projected as of December 31, 2010. For the purposes of the projection, the regulators assumed that BHCs would incur the estimated losses and earn the estimated revenues of the more adverse scenario in 2009 and 2010. SCAP regulators conducted the stress test strictly on the BHCs' assets as of December 31, 2008, and—with the exception of off-balance sheet positions subject to Statements of Financial Accounting Standards No. 166 and 167, which were assumed in the analysis to come on balance sheet as of January 1, 2010—did not take into account any changes in the composition of their balance sheets over the 2-year time frame. Stress testing is one of many risk management tools used by both BHCs and regulators. Complex financial institutions need management information systems that can help firms identify, assess, and manage a full range of risks across the whole organization, arising from both internal and external sources and from assets and obligations that are found both on and off the BHC's balance sheet. Such a firmwide approach to managing risk has been viewed as crucial for responding to rapid and unanticipated changes in financial markets. Risk management also depends on an effective corporate governance system that addresses risk across the institution and also within specific areas, such as subprime mortgage lending. The board of directors, senior management, audit committee, internal auditors, external auditors, and others play important roles in effectively operating a risk management system. The different roles of each of these groups represent critical checks and balances in the overall risk management system. However, the management information systems at many financial institutions have been called into question since the financial crisis began in 2007. Identified shortcomings, such as lack of firmwide stress testing, have led banking organizations and their regulators to reassess capital requirements, risk management practices, and other aspects of bank regulation and supervision. Stress testing has been used throughout the financial industry for more than 10 years, but has recently evolved as a risk management tool in response to the urgency of the financial crisis. The main evolution is toward the use of comprehensive firmwide stress testing as an integral and critical part of firms' internal capital adequacy assessment processes. In the case of SCAP, the intent of the stress test was to help ensure that the capital held by a BHC is sufficient to withstand a plausible adverse economic environment over the 2-year time frame ending December 31, 2010.
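In outline, the SCAP buffer calculation projects each BHC's capital at the end of 2010 under the more adverse scenario and compares it with the target ratios described above. The sketch below is a simplified illustration only: the 4 percent tier 1 common target comes from the report, while the function names, example figures, and the stripped-down projection formula (starting capital, less estimated losses, plus resources other than capital such as PPNR net of any required reserve build) are assumptions made for the example rather than the regulators' actual methodology.

```python
def projected_capital(starting_capital: float, estimated_losses: float,
                      ppnr: float, alll_build: float) -> float:
    """Project end-of-2010 capital under a stress scenario (simplified)."""
    resources_other_than_capital = ppnr - alll_build  # revenue cushion net of reserve build
    return starting_capital - estimated_losses + resources_other_than_capital

def scap_shortfall(projected: float, projected_rwa: float, target_ratio: float) -> float:
    """Additional capital needed, if any, to reach the SCAP target ratio."""
    required = target_ratio * projected_rwa
    return max(0.0, required - projected)

# Hypothetical BHC figures (in $ billions), purely for illustration.
proj_t1_common = projected_capital(starting_capital=30.0, estimated_losses=25.0,
                                   ppnr=12.0, alll_build=4.0)
print(scap_shortfall(proj_t1_common, projected_rwa=500.0, target_ratio=0.04))  # 7.0
```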
The Basel Committee on Banking Supervision (Basel Committee) issued a document in May 2009 outlining several principles for sound stress testing practices and supervision. The Basel Committee document endorses stress testing by banks as a part of their internal risk management to assess the following: Credit risk. The potential for financial losses resulting from the failure of a borrower or counterparty to perform on an obligation. Market risk. The potential for financial losses due to an increase or decrease in the value of an asset or liability resulting from broad price movements; for example, in interest rates, commodity prices, stock prices, or the relative value of currencies (foreign exchange). Liquidity risk. The potential for financial losses due to an institution’s failure to meet its obligations because it cannot liquidate assets or obtain adequate funding. Operational risk. The potential for unexpected financial losses due to a wide variety of institutional factors including inadequate information systems, operational problems, breaches in internal controls, or fraud. Legal risk. The potential for financial losses due to breaches of law or regulation that may result in heavy penalties or other costs. Compliance risk. The potential for loss arising from violations of laws or regulations or nonconformance with internal policies or ethical standards. Strategic risk. The potential for loss arising from adverse business decisions or improper implementation of decisions. Reputational risk. The potential for loss arising from negative publicity regarding an institution’s business practices. According to SCAP regulators and many market participants we interviewed, the process used to design and implement SCAP was effective in promoting coordination and transparency among the regulators and participating BHCs, but some SCAP participants we interviewed expressed concerns about the process. The majority of supervisory and bank industry officials we interviewed stated that they were satisfied with how SCAP was implemented, especially considering the stress test’s unprecedented nature, limited time frame, and the uncertainty in the economy. SCAP established a process for (1) coordinating and communicating among the regulators and with the BHCs and (2) promoting transparency of the stress test to the public. In addition, according to regulators, the process resulted in a methodology that yielded credible results and by design helped to assure that the BHCs would be sufficiently capitalized to weather a more adverse economic downturn. Robust coordination and communication are essential to programs like SCAP when bringing together regulatory staff from multiple agencies and disciplines to effectively analyze complex financial institutions and understand the interactions among multiple layers of risk. Moreover, supervisory guidance emphasizes the importance of coordination and communication among regulators to both effectively assess banks and conduct coordinated supervisory reviews across a group of peer institutions, referred to as “horizontal examinations.” The regulators implemented each phase of SCAP in a coordinated interagency fashion. Also, while some disagreed, most regulators and market participants we interviewed were satisfied with the level of coordination and communication. They also thought that the SCAP process could serve as a model for future supervisory efforts. 
The regulators executed the SCAP process in three broad phases: In the first phase, the Analytical Group, comprising interagency economists and supervisors, generated two sets of economic conditions—a baseline scenario and a more adverse scenario with a worse-than-expected economic outcome—and then used these scenarios to aid in estimating industrywide indicative loan loss rates. To develop these scenarios, the Analytical Group used three primary indicators of economic health: the U.S. GDP, housing prices in 10 key U.S. cities, and the annual average U.S. unemployment rate. The baseline scenario reflected the consensus view of the course for the economy as of February 2009, according to well-known professional economic forecasters. The Federal Reserve developed the more adverse scenario from the baseline scenario by taking into account the historical accuracy of professional forecasters' unemployment and GDP forecasts and the uncertainty of the economic outlook at that time. The Federal Reserve also used regulators' judgment about the appropriate severity of assumed additional stresses against which BHCs would be required to hold a capital buffer, given that the economy was already in a recession at the initiation of SCAP. In the second phase, several Supervisory Analytical and Advisory Teams—comprising interagency senior examiners, economists, accountants, lawyers, financial analysts, and other professionals from the SCAP regulators—collected, verified, and analyzed each BHC's estimates for losses, PPNR, and allowance for loan and lease losses (ALLL). The teams also collected additional data to evaluate the BHCs' estimates and to allow supervisors to develop their own independent estimates of losses for loans, trading assets, counterparty credit risk, and securities, as well as PPNR, for each BHC. In the third phase, the Capital Assessment Group, comprising interagency staff, served as the informal decision-making body for SCAP. The Capital Assessment Group developed a framework for combining the Supervisory Analytical and Advisory Teams' estimates with other independent supervisory estimates of loan losses and resources available to absorb these losses. The group evaluated the estimates by comparing them across BHCs and aggregating them over the 19 BHCs to check for consistency with the specified macroeconomic scenarios, and then calculated the amount, if any, of additional capital needed for each BHC to achieve the SCAP buffer target capital ratios as of December 31, 2010, in the more adverse economic environment. Lastly, the Capital Assessment Group set two deadlines: (1) June 8, 2009, for BHCs requiring capital to develop and submit a capital plan to the Federal Reserve on how they would meet their SCAP capital shortfall and (2) November 9, 2009, for these BHCs to raise the required capital. A key component of this process was the involvement of multidisciplinary interagency teams that leveraged the skills and experiences of staff from different disciplines and agencies. The Federal Reserve, OCC, and FDIC had representatives on each SCAP team (the Analytical Group, Supervisory Analytical and Advisory Teams, and the Capital Assessment Group). For example, OCC officials said that they contributed to the development of quantitative models required for the implementation of SCAP and offered their own models for use in assessing the loss rates of certain portfolios.
In addition, each of the SCAP regulators tapped expertise within their organization for specific disciplines, such as accounting, custodial banking, macroeconomics, commercial and industry loan loss modeling, and consumer risk modeling. According to the FDIC, the broad involvement of experts from across the agencies helped validate loss assumptions and also helped improve confidence in the results. Further, these officials noted that the SCAP process was enhanced because productive debate became a common event as team members from different regulatory agencies and disciplines brought their own perspectives and ideas to the process. For example, some SCAP staff argued for a more moderate treatment of securities in BHCs’ available for sale portfolios, which would have been consistent with generally accepted accounting principles under a new change in accounting standards. They maintained that the modified accounting standard for declines in market value (and discounting the impact of liquidity premia) that had been implemented after the stress test was announced and before the numbers had been finalized was in some ways more reflective of the realized credit loss expectations for the affected securities. After significant discussion, the regulators decided to allow for the accounting change in the baseline loss estimates, but not in the more adverse scenario estimates. They believed that under the more adverse scenario there was a heightened possibility of increased liquidity demands on banks and that many distressed securities would need to be liquidated at distressed levels. Consequently, for securities found to be other than temporarily impaired in the more adverse scenario, they assumed the firm would have to realize all unrealized losses (i.e., write down the value of the security to market value as of year end 2008). Similarly, some staff argued against adopting other changes in accounting standards that were expected to impact BHCs’ balance sheets, including their capital adequacy. Primary among these was the inclusion of previously off-balance sheet items. As noted above, ultimately, the more conservative approach prevailed and the expected inclusion of these assets was addressed in SCAP. To facilitate coordination, the Federal Reserve instituted a voting system to resolve any contentious issues, but in practice differences among regulators were generally resolved through consensus. When SCAP regulators met, the Federal Reserve led the discussions and solicited input from other regulators. For example, officials from OCC and FDIC both told us that they felt that they were adequately involved in tailoring the aggregate loss estimates to each BHC as part of the determination of each BHC’s SCAP capital requirement. SCAP regulators were also involved in drafting the design and results documents, which were publicly released by the Federal Reserve. Representatives from most of the BHCs were satisfied with the SCAP regulators’ coordination and communication. Many of the BHC officials stated that they were generally impressed with the onsite SCAP teams and said that these teams improved the BHCs’ coordination and communication with the regulators. BHC officials said that they usually received answers to their questions in a timely manner, either during conference calls held three times a week, through the distribution of answers to frequently asked questions, or from onsite SCAP examiners. 
Collecting and aggregating data were among the most difficult and time-consuming tasks for BHCs, but most of them stated that the nature of SCAP's requests was clear. At the conclusion of SCAP, the regulators presented the results to each of the institutions, showing the final numbers that they planned to publish. The SCAP process included steps to promote transparency, such as the release of key program information to SCAP BHCs and the public. According to SCAP regulators, BHCs, and credit rating agency officials we interviewed, the release of the results provided specific information on the financial health and viability of the 19 largest BHCs regarding their ability to withstand additional losses during a time of significant uncertainty. Many experts have said that the lack of transparency about potential losses from certain assets contributed significantly to the instability in financial markets during the current crisis. Such officials also stated that publicly releasing the methodology and results of the stress test helped strengthen market confidence. Further, many market observers have commented that the Federal Reserve's unprecedented disclosure of sensitive supervisory information for each BHC helped European bank regulators decide to publicly release detailed results of their own stress tests in July 2010. Not all SCAP participants agreed that the SCAP process was fully transparent. For example, some participants questioned the transparency of certain assumptions used in developing the stress test. According to BHC officials and one regulator, the Federal Reserve could have shared more detailed information about SCAP loss assumptions and calculations with BHCs. According to several BHC officials, the Federal Reserve did not fully explain the methodology for estimating losses but expected BHC officials to fully document and provide supporting data for all of their assumptions. Without knowing the details of the methodology, according to some BHC officials, they could not efficiently provide all relevant information to SCAP examiners. SCAP regulators aimed to ensure that SCAP sufficiently stressed BHCs' risk exposures and potential PPNR under the more adverse scenario. To accomplish this, the regulators made what they viewed to be conservative assumptions and decisions in the following areas. First, the regulators decided to stress only assets that were on the BHCs' balance sheets as of December 31, 2008 (i.e., a static approach), without accounting for new business activity. According to BHC officials, new loans were thought to have generally been of better quality than legacy loans because BHCs had significantly tightened their underwriting standards since the onset of the financial crisis. As a result, BHCs would have been less likely to charge off these loans within the SCAP time period ending December 31, 2010, resulting in the potential for greater reported revenue estimates for the period. Because earnings from new business were excluded, risk-weighted assets were understated, charge-off rates were overstated, and projected capital levels were understated. Second, SCAP regulators generally did not allow the BHCs to cut expenses to address the anticipated drop in revenues under the more adverse scenario. However, some BHC officials told us that they would likely cut expenses, including initiating rounds of layoffs, if the economy performed in accordance with the more adverse economic scenario, especially if they were not generating any new business.
Federal Reserve officials noted that BHCs were given credit in the stress test for cost cuts made in the first quarter of 2009. Third, some BHCs were required to assume an increase in their ALLL as of the end of 2010, if necessary, to ensure adequate reserves relative to their year-end 2010 portfolio. Some BHC officials believed that this requirement resulted in the BHCs having to raise additional capital because the required ALLL increases were subtracted from the revenue estimates in calculating the resources available to absorb losses. This meant that some BHCs judged to have insufficient year-end 2010 reserve adequacy had to account for this shortcoming in the calculation of capital needed to meet the SCAP targeted capital requirements as of the end of 2010 while maintaining a sufficient ALLL for 2011 losses under the more adverse economic scenario. According to some BHCs, the required size of the 2010 ALLL was severe given the extent of losses already included in the 2009 and 2010 loss estimates, and it effectively stressed BHCs for a third year. Finally, according to many BHC officials and others, the calculations used to derive the loan loss rates and other assumptions to stress the BHCs were conservative (i.e., more severe). For example, the total loan loss rate estimated by the SCAP regulators was 9.1 percent, which was greater than the historical 2-year loan loss rates at all commercial banks from 1921 until 2008, including the worst levels seen during the Great Depression (see figure 2). However, the macroeconomic assumptions of the more adverse scenario, which we will discuss later in the report, did not meet the definition of a depression. Specifically, a 25 percent unemployment rate coupled with economic contraction is indicative of a depression. In contrast, the more adverse scenario estimated approximately a 10 percent unemployment rate with some economic growth in late 2010. SCAP regulators also estimated ranges for loan loss rates within specific loan categories using the baseline and more adverse scenarios as guides. They used a variety of methods to tailor loan losses to each BHC, including an analysis of past BHC losses and quantitative models, and sought empirical support from BHCs regarding the risk level of their portfolios. Some BHCs told us that the Federal Reserve made substantial efforts to help ensure conformity with the indicative loan loss rates while incorporating BHC-specific information where possible and reliable. Table 1 compares the different indicative loan loss rate ranges under the more adverse scenario for each asset category with actual losses in 2009 for SCAP BHCs and the banking industry. Some BHCs stated that the resulting loan loss rates were indicative of an economy worse off than that represented by the more adverse macroeconomic assumptions, although they recognized the need for the more conservative approach. However, nearly all agreed that the loan loss rates were a more important indication of the stringency of SCAP than the assumptions. After the public release of the SCAP methodology in April 2009, many observers commented that the macroeconomic assumptions for a more adverse economic downturn were not severe enough given the economic conditions at that time. In defining a more adverse economic scenario, the SCAP regulators made assumptions about the path of the economy using three broad macroeconomic indicators—changes in real GDP, the unemployment rate, and home prices—during the 2-year SCAP period ending December 2010.
Actual GDP and home prices have performed better than assumed under the more adverse scenario. However, the actual unemployment rate has more closely tracked the more adverse scenario (see figure 3). Further, as noted earlier, some regulatory and BHC officials have indicated that the loan loss rates that the regulators subsequently developed were more severe than one would have expected under the macroeconomic assumptions. While our analysis of actual and SCAP estimated indicative loan losses (see table 1) is generally consistent with this view, these estimates were developed at a time of significant uncertainty about the direction of the economy and the financial markets, as well as an unprecedented deterioration in the U.S. housing markets. SCAP largely met its goals of increasing the level and quality of capital held by the 19 largest BHCs and, more broadly, of strengthening market confidence in the banking system. The stress test identified 10 of the 19 BHCs as needing to raise a total of about $75 billion in additional capital. The Federal Reserve encouraged the BHCs to raise common equity via private sources—for example, through new common equity issuances, conversion of existing preferred equity to common equity, and sales of businesses or portfolios of assets. Nine of the 10 BHCs were able to raise the required SCAP amount of new common equity in the private markets by the November 9, 2009, deadline (see table 2). Some of these BHCs also raised capital internally from other sources. GMAC LLC (GMAC) was the only BHC that was not able to raise sufficient private capital by the November 9, 2009, deadline. On December 30, 2009, Treasury provided GMAC with a capital investment of $3.8 billion to help fulfill its SCAP capital buffer requirement, drawing funds from TARP's Automotive Industry Financing Program. A unique and additional element of the estimated losses for GMAC included the unknown impact of possible bankruptcy filings by General Motors Corporation (GM) and Chrysler LLC (Chrysler). Thus, a conservative estimate of GMAC's capital buffer was developed in response to this possibility. The Federal Reserve, in consultation with Treasury, subsequently reduced GMAC's required SCAP capital buffer by $1.8 billion—from $5.6 billion to $3.8 billion—primarily to reflect the lower-than-estimated actual losses from the bankruptcy proceedings of GM and Chrysler. GMAC was the only company to have its original capital buffer requirement reduced. Capital adequacy generally improved across all 19 SCAP BHCs during 2009. As shown in table 3, the largest gains were in tier 1 common capital, which increased by about 51 percent in the aggregate across the 19 BHCs, rising from $412.5 billion on December 31, 2008, to $621.9 billion by December 31, 2009. On an aggregate basis, the tier 1 common capital ratio at the BHCs increased from 5.3 percent to 8.3 percent of risk-weighted assets (compared with the SCAP threshold of 4 percent at the end of 2010). The tier 1 risk-based capital ratio also grew from 10.7 percent to 11.3 percent of risk-weighted assets (compared with the SCAP threshold of 6 percent at the end of 2010). While these ratios were helped to some extent by reductions in risk-weighted assets, which fell 4.3 percent from $7.815 trillion on December 31, 2008, to $7.481 trillion on December 31, 2009, the primary driver of the increases was the growth in total tier 1 common capital.
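The aggregate figures above can be reproduced with simple arithmetic. The short sketch below only recomputes the cited ratios and growth rates from the dollar amounts in the text (the report's table 3); it uses no data beyond those figures.

```python
# Aggregate tier 1 common capital and risk-weighted assets, in $ billions,
# as cited above for December 31, 2008, and December 31, 2009.
t1_common_2008, t1_common_2009 = 412.5, 621.9
rwa_2008, rwa_2009 = 7815.0, 7481.0  # $7.815 trillion and $7.481 trillion

print(f"Tier 1 common growth:           {t1_common_2009 / t1_common_2008 - 1:.0%}")  # ~51%
print(f"Tier 1 common ratio, 2008:      {t1_common_2008 / rwa_2008:.1%}")            # ~5.3%
print(f"Tier 1 common ratio, 2009:      {t1_common_2009 / rwa_2009:.1%}")            # ~8.3%
print(f"Change in risk-weighted assets: {rwa_2009 / rwa_2008 - 1:.1%}")              # ~-4.3%
```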
The quality of capital—measured as the portion of capital made up of tier 1 common equity—also increased across most of the BHCs in 2009. The tier 1 common capital ratio increased at 17 of the 19 BHCs between the end of 2008 and the end of 2009 (see table 4). Citigroup Inc. (Citigroup) and The Goldman Sachs Group, Inc. (Goldman Sachs) had the largest increases in tier 1 common capital ratios—747 and 450 basis points, respectively. However, GMAC's tier 1 common capital ratio declined by 155 basis points in this period to 4.85 percent. MetLife, Inc. was the only other BHC to see a drop in its tier 1 common capital ratio, which fell by 33 basis points to 8.17 percent, still more than double the 4 percent target. Based on the SCAP results document, the 2008 balances in the table include the impact of certain mergers and acquisitions, such as Bank of America Corporation's (Bank of America) purchase of Merrill Lynch & Co. Inc. Further, the increase in capital levels reflects the capital that was raised as a result of SCAP. As previously stated by interviewees, the unprecedented public release of the stress test results helped to restore investors' confidence in the financial markets. Some officials from participating BHCs and credit rating agencies also viewed the BHCs' ability to raise the capital required by the stress test as further evidence of SCAP's success in increasing market confidence and reducing uncertainty. But some expressed concerns that the timing of the announcement of SCAP on February 10, 2009—nearly 3 months before the results were released on May 7, 2009—may have intensified market uncertainty about the financial health of the BHCs. A broad set of market indicators also suggests that the public release of SCAP results may have helped reduce uncertainty in the financial markets and increased market confidence. For example, banks' renewed ability to raise private capital reflects improvements in perceptions of the financial condition of banks. Specifically, banks and thrifts raised $56 billion in common equity during all of 2008 but raised $63 billion in the second quarter of 2009 alone (see figure 4). The substantial increase in second quarter issuance of common equity occurred after the stress test results were released on May 7, 2009, and was dominated by several SCAP institutions. Similarly, stock prices improved substantially in the overall banking sector and among the 18 public BHCs that participated in SCAP from the release of the stress test results in May 2009 through October 2009 (see figure 5). The initial increase after May 2009 also suggests that SCAP may have helped bolster investor and public confidence. However, equity markets are generally volatile and react to a multitude of events. Credit default swap spreads, another measure of confidence in the banking sector, also improved. A credit default swap is an agreement in which a buyer pays a periodic fee to a seller in exchange for protection from certain credit events such as bankruptcy, failure to pay debt obligations, or a restructuring related to a specific debt issuer or issues known as the reference entity. Therefore, the credit default swap spread, or market price, is a measure of the credit risk of the reference entity, with a higher spread indicating a greater amount of credit risk. When the markets' perception of the reference entity's credit risk deteriorates or improves, the spread generally will widen or tighten, respectively.
Following the SCAP results release in May 2009, the credit default swap spreads continued to see improvements (see figure 6). While many forces interact to influence investors’ actions, these declining spreads suggest that the market’s perception of the risk of banking sector defaults was falling. Further, the redemption of TARP investments by some banking institutions demonstrated that regulators believed these firms could continue to serve as a source of financial and managerial strength, as well as fulfill their roles as intermediaries that facilitate lending, while both reducing reliance on government funding and maintaining adequate capital levels. This positive view of the regulators may also have helped increase market confidence in the banking system (see appendix II for details on the status of TARP investments in the institutions participating in SCAP). As of the end of 2009, while the SCAP BHCs generally had not experienced the level of losses that were estimated on a pro rata basis under the stress test’s more adverse economic scenario, concerns remain that some banks could absorb potentially significant losses in certain asset categories that would erode capital levels. Collectively, the BHCs’ total loan losses of $141.2 billion were approximately 38 percent less than the GAO-calculated $229.4 billion in pro rata losses under the more adverse scenario for 2009 (see table 5). The BHCs also experienced significant gains in securities and trading and counterparty credit risk portfolios compared with estimated pro rata losses under SCAP. Total resources other than capital to absorb losses (resources) were relatively close to the pro rata amount, exceeding it by 4 percent. In tracking BHCs’ losses and resources against the SCAP estimates, we compared the actual results with those estimated under the more adverse scenario. We used the 2-year estimates of the more adverse scenario from the SCAP results and annualized those amounts by dividing them in half (the “straight line” method) to get pro rata loss estimates for 2009 because the SCAP regulators did not develop estimates on a quarterly or annual basis. A key limitation of this approach is that it assumes equal distribution of losses, revenues, expenses, and changes to reserves over time, although these items were unlikely to be distributed evenly over the 2-year period. Another important consideration is that actual results were not intended and should not be expected to align with the SCAP projections. Actual economic performance in 2009 differed from the SCAP macroeconomic variable inputs, which were based on a scenario that was more adverse than was anticipated or than occurred, and other forces in the business and regulatory environment could have influenced the timing and level of losses. Appendix I contains additional details on our methodology, including our data sources and calculations, for tracking BHCs’ financial performance data. Although the 19 BHCs’ actual combined losses were less than the 2009 pro rata loss estimates for the more adverse scenario, the loss rates varied significantly by individual BHCs. For example, most of the BHCs had consumer and commercial loan losses that were below the pro rata loss estimates, but three BHCs—GMAC, Citigroup, and SunTrust Banks Inc. (SunTrust)—exceeded these estimates in at least one portfolio (see figure 7). GMAC was the only one with 2009 loan losses on certain portfolios that exceeded SCAP’s full 2-year estimate. 
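The straight-line proration used in this comparison is simple arithmetic, sketched below. The $141.2 billion actual and $229.4 billion pro rata loan loss figures are the aggregate amounts cited above; the function names and the implied $458.8 billion 2-year estimate (twice the pro rata amount) are shown only for illustration.

```python
def pro_rata(two_year_estimate: float) -> float:
    """Annualize a 2-year SCAP estimate by the straight-line method (divide by 2)."""
    return two_year_estimate / 2.0

def percent_vs_estimate(actual: float, estimate: float) -> float:
    """How far actual results came in above (+) or below (-) the benchmark."""
    return (actual - estimate) / estimate

# Aggregate 2009 loan losses, in $ billions, as cited above.
actual_2009_loan_losses = 141.2
pro_rata_2009_estimate = pro_rata(458.8)  # 229.4

print(f"{percent_vs_estimate(actual_2009_loan_losses, pro_rata_2009_estimate):.0%}")  # about -38%
```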
GMAC, specifically, exceeded the SCAP 2-year estimated losses in the first-lien, second/junior lien, and commercial real estate portfolios and the 1-year pro rata losses in the "Other" portfolio; Citigroup exceeded the 1-year pro rata estimated losses in the commercial and industrial loan portfolio; and SunTrust exceeded the 1-year estimated losses in the first-lien and credit card portfolios. Appendix III provides detailed data on the individual performance of each of the BHCs. GMAC faced particular challenges in the first year of the assessment period and posed some risk to the federal government, a majority equity stakeholder. GMAC's loan losses in its first-lien portfolio were $2.4 billion, compared with the $2 billion projected for the full 2-year period. In the second/junior lien portfolio, GMAC saw losses of $1.6 billion, compared with the $1.1 billion estimated losses for the 2 years. GMAC experienced losses of $710 million in its commercial real estate portfolio, compared with $600 million projected for the full 2-year period. Further, in its "Other" portfolio (which is composed of auto leases and consumer auto loans), GMAC's losses were $2.1 billion, exceeding the 1-year pro rata $2 billion loss estimate. With a tier 1 common capital ratio of 4.85 percent—just more than the SCAP threshold of 4 percent—at the end of 2009, GMAC had a relatively small buffer in the face of potential losses. GMAC's position should be placed in context, however, because its situation is unique among the SCAP participants. It was the only nonpublicly traded participant, and the federal government owns a majority equity stake in the company as a result of capital investments made through the Automotive Industry Financing Program under TARP. Further, GMAC's core business line—financing for automobiles—is dependent on the success of efforts to restructure, stabilize, and grow General Motors Company and Chrysler Group LLC. Finally, the Federal Reserve told us that because GMAC only recently became a BHC and had not previously been subject to banking regulations, it would take some time before GMAC was fully assimilated into a regulated banking environment. GMAC officials stated that, to improve the company's future operating performance and better position it to become a public company, GMAC posted large losses in the fourth quarter of 2009 as a result of accelerating its recognition of lifetime losses on loans. In addition, the company has been restructuring its operations and recently sold off some nonperforming assets. However, the credit rating agencies we met with generally believed that there could still be further losses at GMAC, although the agencies were less certain about the pace and level of those losses. Two of the agencies identified GMAC's Residential Capital, LLC mortgage operation as the key source of potential continued losses. Given that market conditions have generally improved, the BHCs' investments in securities and trading account assets performed considerably better in 2009 than had been estimated under the pro rata more adverse scenario. The SCAP assessment of the securities portfolio consisted of an evaluation for possible impairment of the portfolio's assets, including Treasury securities, government agency securities, sovereign debt, and private sector securities. In the aggregate, the securities portfolio experienced a gain of $3.5 billion in 2009, compared with a pro rata estimated loss of $17.6 billion under the stress test's more adverse scenario.
As figure 8 shows, 5 of the 19 BHCs recorded securities losses in 2009, 13 recorded gains, and 1 (Morgan Stanley) recorded no gain or loss. Losses were projected at 17 of the BHCs under the pro rata more adverse scenario, and SCAP regulators did not consider the remaining 2 BHCs (American Express Company and Morgan Stanley) to be applicable for this category. In the securities portfolio, The Bank of New York Mellon Corporation had losses greater than estimated under SCAP for the full 2-year period. The variances could be due to a number of factors, including the extent to which a BHC decided to deleverage and how its positions reacted to changing market values. To estimate trading and counterparty losses, SCAP regulators assumed that these investments would experience changes in value proportional to those seen in the last half of 2008. The trading portfolio shows an even greater difference between the 1-year pro rata estimates and the actual performance—a gain of $56.9 billion in 2009 rather than the pro rata $49.7 billion estimated loss under the more adverse scenario (see table 5). The stress test only calculated trading and counterparty credit loss estimates for the five BHCs with trading assets that exceeded $100 billion. All five had trading gains as opposed to losses, based on the publicly available Y-9C regulatory data. These gains were the result of a number of particular circumstances. First, the extreme spreads and risk premiums resulting from the lack of liquidity during the financial crisis—especially in the second half of 2008—reversed in 2009, improving the pricing of many risky trading assets that remained on BHCs' balance sheets. Because the trading portfolio is valued at fair value, it had been written down for the declines in value that occurred throughout 2008 and the first quarter of 2009 and saw significant gains when the market rebounded through the remainder of 2009. Second, the crisis led to the failure or absorption of several large investment banks, reducing the number of competitors and, according to our analysis of Thomson Reuters Datastream data, increasing market share and pricing power for the remaining firms. Finally, the Federal Reserve's low overnight bank lending rates (near 0 percent) prevailed for a long period and facilitated a favorable trading environment for BHCs, enabling them to fund longer-term, higher-yielding assets in their trading portfolios with discounted wholesale funding (see figure 9). Potentially large losses in consumer and commercial loans continue to challenge SCAP BHCs, and addressing these challenges depends on a variety of factors, including, among other things, the effectiveness of federal efforts to reduce foreclosures in the residential mortgage market. The BHCs absorbed nearly $400 billion in losses in the 18 months ending December 31, 2008. As they continue to experience the effects of the recent financial crisis, estimating precisely how much more they could lose is difficult. In March 2010, officials from two credit rating agencies indicated that 50 percent or more of the losses the banking industry was expected to incur during the current financial crisis could still be realized if the economy were to suffer further stresses. Data for the 19 BHCs show a rapid rise in the percentage of nonperforming loans over the course of 2009 (see figure 10).
Specifically, the SCAP BHCs' total nonperforming loans grew from 1 percent of total loans in the first quarter of 2007 to 6.6 percent in the fourth quarter of 2009. In particular, increases in total nonperforming loans were driven by significant growth in nonperforming first-lien mortgages and commercial real estate loans. Standard & Poor's Corporation noted that many nonperforming loans may ultimately have to be charged off, exposing the BHCs to further potential losses. According to the credit rating agencies that we interviewed, federal housing policy to aid homeowners who are facing foreclosures, as well as time lags in the commercial real estate markets, will likely continue to affect the number of nonperforming loans for the remainder of the SCAP time frame (December 2010). The total amount of resources other than capital to absorb losses (resources) has closely tracked the amount GAO prorated under the stress test's more adverse scenario. Resources measure how much cushion the BHCs have to cover loan losses. As shown previously in table 5, the aggregate actual results through the end of 2009 for resources showed a total of $188.4 billion, or 4 percent more than GAO's pro rata estimate of $181.5 billion under the stress test's more adverse scenario. Eleven of the 19 BHCs generated more resources than the pro rata estimated amount in 2009, while the remaining 8 generated less (see figure 11). GMAC and MetLife, Inc. had negative resources in 2009, although only GMAC was projected to have negative resources over the full 2-year period. Our calculation considers increases in ALLL during 2009 to be a drain on resources in order to mirror the regulators' calculation for the full 2-year projection. However, the ALLL may ultimately be used as a resource in 2010, causing available resources to be higher than they currently appear in our tracking. PPNR is based on numerous factors, including interest income, trading revenues, and expenses. The future course of this resource will be affected by factors such as the performance of the general economy, the BHCs' business strategies, and regulatory changes, including the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank Act) and the Credit Card Accountability, Responsibility, and Disclosure Act of 2009. Such regulatory changes could impose additional costs or reduce future profitability, either of which would affect future PPNR. The SCAP stress test provided lessons in a number of areas that can be incorporated into the bank supervision process and used to improve BHCs' risk management practices. First, the transparency that was part of SCAP helped bolster market confidence, but the Federal Reserve has not yet developed a plan that incorporates transparency into the supervisory process. Second, the SCAP experience highlighted that BHCs' stress tests in the past were not sufficiently comprehensive, and we found that regulators' oversight of these tests has been generally weak. Third, we identified opportunities to enhance both the process and data inputs for conducting stress testing in the future. Finally, SCAP demonstrated the importance of robust coordination and communication among the different regulators as an integral part of any effective supervisory process. By incorporating these lessons going forward, regulators will be able to enhance their ability to efficiently and effectively oversee risk-taking in the banking industry.
As stated earlier and as agreed generally by market participants, the public release of the SCAP design and results helped restore confidence in the financial system during a period of severe turmoil. Some agency officials stated that their experience in implementing SCAP suggested that greater transparency would also be beneficial in the supervisory process. In recent statements, the chairman and a governor of the Federal Reserve have both stated that, while protecting the confidentiality of firm-specific proprietary information is imperative, the methods and conclusions of future stress tests could benefit from greater scrutiny by the public. The Federal Reserve governor also noted that feedback from the public could help to improve the methodologies and assumptions used in the supervisory process. In addition, they noted that more transparency about the central bank's activities overall would ultimately enhance market discipline and that the Federal Reserve is looking at ways to enhance its disclosure policies. Consistent with the goal of greater transparency, we previously recommended that the Federal Reserve consider periodically disclosing to the public the aggregate performance of the 19 BHCs against the SCAP estimates for the 2-year forecast period. Subsequently, the chairman and a governor of the Federal Reserve have publicly disclosed 2009 aggregate information about the performance of the 19 BHCs based on the Federal Reserve's internal tracking. As the 2-year SCAP period comes to a close at the end of 2010, completing a final analysis that compares the performance of BHCs with the estimated performance under the more adverse economic scenario would be useful; however, at the time of our review, Federal Reserve officials told us that they had not decided whether to conduct and publicly release any such analysis. Given that the chairman and a governor of the Federal Reserve have already publicly disclosed some aggregate BHC performance against the more adverse scenario for 2009, releasing the 2-year results would provide the public with consistent and reliable information from the chief architect of the stress test that could be used to further establish the importance of understanding such tests and to consider lessons learned about the rigor of the stress test estimates. Increasing transparency in the bank supervisory process is a more controversial issue to address. Supervisory officials from OCC (including the then Comptroller) and the Federal Reserve questioned the extent to which greater transparency would improve day-to-day bank supervision. Some BHCs we interviewed also were against public disclosure of future stress test results. They noted that SCAP was a one-time stress test conducted under unique circumstances. Specifically, during the financial crisis, Treasury had provided a capital backstop for BHCs that were unable to raise funds privately. They expressed concern that public disclosure of certain unfavorable information about individual banks in a normal market environment could cause depositors to withdraw funds en masse, creating a "run" on the bank. In addition, banks that appear weaker than their peers could be placed at a competitive disadvantage, which may encourage them to offer more aggressive rates and terms to attract new depositors, thereby increasing their riskiness and further affecting their financial stability.
While these concerns are valid and deserve further consideration, they have to be weighed against the potential benefits of greater transparency about the financial health of financial institutions and the banking system in general to investors, creditors, and counterparties. The Dodd-Frank Act takes significant steps toward greater transparency. For example, the act requires the Federal Reserve to perform annual stress tests on systemically significant institutions and publicly release a summary of results. Also, the act requires each of the systemically significant institutions to publicly report a summary of its internal stress tests semiannually. Given comments by its senior leadership, the Federal Reserve appears willing to engage in a constructive dialogue about creating a plan for greater transparency that could benefit the entire financial sector. The other federal bank regulators—FDIC, OCC, and the Office of Thrift Supervision—are also critical stakeholders in developing such a plan. While Federal Reserve officials have discussed possible options for increasing transparency, the regulators have yet to engage in a formal dialogue about these issues and have not formalized a plan for the public disclosure of regulatory banking information or developed a plan for integrating public disclosures into the ongoing supervisory process. Without a plan for reconciling these divergent views and for incorporating steps to enhance transparency into the supervisory process and practices, including the public disclosure of certain information, bank regulators may miss a significant opportunity to enhance market discipline by providing investors, creditors, and counterparties with information such as bank asset valuations. SCAP highlighted that the development and utilization of BHCs’ stress tests were limited. Further, BHC officials noted that they had failed to adequately stress test for the effects of a severe economic downturn scenario and did not test on a firmwide basis or test frequently enough. We also found that regulators’ oversight of these tests was weak, reinforcing the need for more rigorous and firmwide stress testing, better risk governance processes by BHCs, and more vigorous oversight of BHCs’ stress tests by regulators. Going forward, as stress tests become a fundamental part of the oversight of individual banks and the financial system, more specific guidance needs to be developed for examiners. BHCs and regulators stated that they are taking steps to address these shortcomings. Prior to SCAP, many BHCs generally performed stress tests on individual portfolios, such as commercial real estate or proprietary trading, rather than on a firmwide basis. SCAP led some institutions to look at their businesses in the aggregate to determine how losses would affect the holding company’s capital base rather than individual portfolios’ capital levels. As a result, some BHC officials indicated that they had begun making detailed assessments of their capital adequacy and risk management processes and are making improvements. Officials from one BHC noted that before SCAP their financial and risk control teams had run separate stress tests but had not communicated or coordinated with each other about their stress testing activities. Officials from another BHC noted that their senior management and board of directors were not actively involved in the oversight of the stress testing process.
These officials said that since participating in SCAP, they have improved in these areas by institutionalizing internal communication and coordination procedures between the financial and risk control teams and by increasing communication with senior management and the board of directors about the need for active involvement in risk management oversight, respectively. These improvements can enhance the quality of the stress testing process. Moreover, officials of BHCs that were involved in ongoing bank mergers during the SCAP process credited SCAP with speeding up the conversion of the two institutions’ financial systems, since the BHCs’ staff had to work together to quickly provide, among other things, the aggregate asset valuations and losses of the combined firm’s balance sheets to the regulators. BHC officials also stated that their stress tests would take a firmwide view, that is, take into account all business units and risks within the holding company structure, and would include updates of the economic inputs used to determine potential losses and capital needs in adverse scenarios. One BHC noted that it had developed several severe stress scenarios for liquidity because the recent financial crisis had shown that liquidity could deteriorate more quickly than capital, endangering a company’s prospects for survival. This danger became evident in the failures of major financial institutions during the recent financial crisis, for example, IndyMac Bank, Lehman Brothers, and Bear Stearns. Officials from many SCAP BHCs and the Federal Reserve noted that internal bank stress test models generally did not use macroeconomic assumptions and loss rate inputs as conservative as those used in the SCAP stress test. According to Federal Reserve officials, using the SCAP macroeconomic assumptions, most of the 19 BHCs that took part in SCAP initially determined that they would not need additional capital to weather the more adverse scenario. However, the SCAP test results subsequently showed that more than half of them (10 of 19) did need to raise capital to meet the SCAP capital buffer requirements. Some BHCs indicated that future stress tests would be more comprehensive than SCAP. BHCs can tailor their stress test assumptions to match their specific business models, while SCAP generally used a one-size-fits-all approach to assumptions. For example, some BHCs noted that they use macroeconomic inputs (such as disability claims, prolonged stagflation, or consumer confidence) that were not found in the SCAP stress test. Although the Federal Reserve has required BHCs to conduct stress tests since 1998, officials from several BHCs noted that their institutions had not conducted rigorous stress tests in the years prior to SCAP, a statement that is consistent with regulatory findings during the same period. To some degree, this lack of rigorous testing reflected the relatively good economic times that preceded the financial crisis. According to one credit rating agency and a BHC, stress test assumptions generally tend to be more optimistic in good economic times and more pessimistic in bad economic times. In addition, one BHC noted that it had conducted stress tests on and off for about 20 years, but usually only as the economy deteriorated. To address this issue, many BHC officials said that they have incorporated or are planning to incorporate more conservative inputs into their stress test models and are conducting more rigorous, firmwide stress testing more frequently.
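To illustrate the firmwide view described above, the sketch below aggregates scenario losses across hypothetical portfolios and compares them with a holding company's capital and projected revenue. All portfolio names, balances, loss rates, and capital figures are hypothetical and are not drawn from SCAP.

```python
# Illustrative firmwide stress aggregation: apply scenario loss rates to each
# portfolio, sum the losses across business lines, and compare the total with
# the holding company's capital plus projected revenue under the scenario.
# All names and figures below are hypothetical.

portfolios = {
    # portfolio: (balance in $ billions, assumed 2-year stressed loss rate)
    "first_lien_mortgages":   (120.0, 0.08),
    "credit_cards":           (60.0, 0.20),
    "commercial_real_estate": (80.0, 0.12),
    "commercial_industrial":  (100.0, 0.06),
}

projected_revenue = 18.0   # $ billions available to absorb losses under the scenario
starting_capital = 25.0    # $ billions of tier 1 common capital

total_losses = sum(balance * rate for balance, rate in portfolios.values())
capital_after_stress = starting_capital + projected_revenue - total_losses

print(f"Firmwide stressed losses: ${total_losses:.1f} billion")
print(f"Capital remaining after stress: ${capital_after_stress:.1f} billion")
if capital_after_stress < 0:
    print("Capital shortfall under the scenario; an additional buffer would be needed.")
```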
Although regulators’ guidelines have required for over 10 years that financial institutions use stress tests to assess their capacity to withstand losses, we found that regulators’ oversight of these tests had been limited. Horizontal examinations by the regulators from 2006 through 2008 identified multiple weaknesses in institutions’ risk management systems, including deficiencies in stress testing. Weaknesses found during these examinations included BHC stress tests of balance sheets that lacked severity, were not performed frequently enough, and were not done on a firmwide basis. Examiners also found that BHCs’ risk governance processes lacked the active and effective involvement of senior management and boards of directors. The SCAP stress test and the financial crisis revealed the same shortcomings in BHCs’ risk management and stress testing practices. However, we previously found that regulators did not always effectively address these weaknesses or in some cases fully appreciate their magnitude. Specifically, regulators did not take measures to push forcefully for institutions to better understand and manage risks in a timely and effective manner. In addition, according to our discussions with some SCAP participants, oversight of these tests through routine examinations was limited in scope and tended to be discretionary. For example, regulators would review firms’ internal stress tests of counterparty risk and make some suggestions, but reviews of these tests were done at the discretion of the individual supervisory team and were not consistently performed across teams. Even though BHCs have for many years performed stress tests to one degree or another, they have not been required to report the results of their testing to the Federal Reserve unless it specifically requested the information. The Federal Reserve recently issued a letter to the largest banking organizations outlining its view on good practices with respect to the use of stress testing in the context of internal capital adequacy assessment practices (ICAAP). For example, some areas highlighted in the letter include how frequently stress tests should be performed, the minimum time frame the tests should cover, documentation of the process, involvement of senior management and the board of directors, and the types of scenarios and risks to include in such tests. Some BHC officials believed that stress testing would become an integral part of future risk management practices and noted that SCAP helped them see how bank examiners would want them to stress their portfolios in the future. In anticipation of future action by regulators, many BHCs were designing at least part of their stress tests along the lines of SCAP. However, a few BHC officials hoped that future stress tests would not be performed in the same manner as SCAP, with the largest institutions tested simultaneously in a largely public setting, but rather as part of the confidential supervisory review process. Federal Reserve officials stated that going forward, stress tests will become a fundamental part of the agency’s oversight of individual banks and the financial system. As a result of SCAP, Federal Reserve officials stated that they are placing greater emphasis on the BHCs’ internal capital adequacy planning through their ICAAP. This initiative is intended to improve the measurement of firmwide risk and the incorporation of all risks into firms’ capital assessment and planning processes.
In addition to enhanced supervisory focus on these practices across BHCs, stress testing is also a key component of the Basel II capital framework (Pillar 2). Under Pillar 2, supervisory review is intended to help ensure that banks have adequate capital to support all risks and to encourage banks to develop and use better risk management practices. All BHCs, including those adopting Basel II, must have a rigorous process of assessing capital adequacy that includes strong board and senior management oversight, comprehensive assessment of risks, rigorous stress testing and scenario analyses, validation programs, and independent review and oversight. In addition, Pillar 2 requires supervisors to review and evaluate banks’ internal capital adequacy assessments and monitor compliance with regulatory capital requirements. The Federal Reserve wants the large banks to conduct this work for themselves and report their findings to their senior management and boards of directors. According to Federal Reserve officials, it may take 18 to 24 months for BHCs to satisfy the totality of expectations for ICAAP, partly because the BHCs are taking actions to enhance practices where needed—including with respect to the use of stress testing and scenario analyses in internal capital assessments—and the Federal Reserve then needs to evaluate these actions across a relatively large number of BHCs. In addition, the Federal Reserve is finalizing guidance for examiners to assess the capital adequacy process, including stress testing, for BHCs. Examiners are expected to evaluate how BHCs’ stress tests inform the process for identifying and measuring risk and decisions about capital adequacy. Federal Reserve officials stated that examiners are expected to look closely at BHCs’ internal stress test methodologies and results. In a letter to BHCs, the Federal Reserve also emphasized that institutions should look at time frames of 2 or more years and consider losses firmwide. It also suggested that BHCs develop their own stress test scenarios and then review these scenarios and the results for appropriate rigor and quantification of risk. While these are positive steps, examiners do not have specific criteria for assessing the quality of these tests. For example, the Federal Reserve has not established criteria for assessing the severity of the assumptions used to stress BHCs’ balance sheets. Federal Reserve officials stated that they intend to have technical teams determine the type of criteria that will be needed to evaluate these assumptions, but they are in the early planning stages. Development of such criteria will be particularly helpful in ensuring the effective implementation of the stress test requirements under the Dodd-Frank Act. Without specific criteria, Federal Reserve examiners will not be able to ensure the rigor of BHCs’ stress tests—an important part of capital adequacy planning. Furthermore, the absence of such guidance could lead to variations in the intensity of these assessments by individual examiners and across regional districts. Following SCAP, regulatory and BHC officials we met with identified opportunities to enhance both the process and data inputs for conducting stress testing in the future. This would include processes for obtaining, analyzing, and sharing data and capabilities for data modeling and forecasting, which potentially could increase the Federal Reserve’s ability to assess risks in the banking system.
According to the Federal Reserve, an essential component of this new system will be a quantitative surveillance mechanism for large, complex financial institutions that will combine a more firmwide and multidisciplinary approach to bank supervision. This quantitative surveillance mechanism will use supervisory information, firm-specific data analysis, and market-based indicators to identify developing strains and imbalances that may affect multiple institutions, as well as emerging risks within specific institutions. This effort by the Federal Reserve may also improve other areas of supervision that rely on data and quantitative analysis, such as assessing the process used by BHCs to determine their capital adequacy, forecasting revenue, and assessing and measuring risk, which is critical to supervising large, complex banks. Officials at the Federal Reserve told us that examiners should be analyzing BHC performance against stress test projections to provide insight into the agency’s loss forecasting approach. Moreover, Federal Reserve officials stated that they are always looking to increase their analytical capabilities, and they have recently implemented a new governance structure to address some of their management information infrastructure challenges. However, not enough time has passed to determine the extent to which such measures will improve banking supervision. In addition, the SCAP stress test identified deficiencies in the data reported to the Federal Reserve by BHCs on the Y-9C, as well as in the Federal Reserve’s ability to analyze the risk of losses pertaining to certain portfolios. This led the Federal Reserve to develop a more robust risk identification and assessment infrastructure, including internally developed models and analytical software and tools purchased from data vendors. Going forward, such models and analytics would facilitate improved risk identification and assessment capabilities and oversight, including the oversight of systemic risk. Moreover, a risk identification and assessment system that can gauge risk in the banking sector by collecting data on a more timely basis is necessary to better ensure the safety and soundness of the banking industry. Specific areas in which data collection and risk identification and assessment could be enhanced include mortgage default modeling to include more analysis of nontraditional mortgage products, counterparty level exposures, country and currency exposures, and commodity exposures. An example of how the Federal Reserve used SCAP to significantly upgrade its ability to assess risks across large BHCs is the development of a system that allowed BHCs to submit their securities positions and market values as of a fixed date so that price shocks could be applied to them. This process was enhanced during SCAP to facilitate the stress analysis of securities portfolios held by SCAP BHCs. This system allowed the Federal Reserve to analyze approximately 100,000 securities in a relatively short time period. The Federal Reserve intends to continue using this database to receive and analyze updated positions from BHCs. With other portfolios, the Federal Reserve contracted with outside data and analytical systems providers.
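The price-shock approach described above can be illustrated with a minimal sketch: positions are submitted with market values as of a fixed date, and a stressed price decline is applied to each asset class to estimate mark-to-market losses. The asset classes, shock sizes, and positions below are hypothetical, not the Federal Reserve's actual inputs.

```python
# Minimal sketch of applying price shocks to submitted securities positions.
# Asset classes, shock sizes, and positions are hypothetical.

positions = [
    # (security id, asset class, market value in $ millions)
    ("SEC-001", "non_agency_rmbs", 250.0),
    ("SEC-002", "corporate_debt", 400.0),
    ("SEC-003", "agency_mbs", 600.0),
]

price_shocks = {
    # fractional decline in market value under the stress scenario
    "non_agency_rmbs": 0.35,
    "corporate_debt": 0.10,
    "agency_mbs": 0.05,
}

losses = {sec_id: value * price_shocks[asset_class]
          for sec_id, asset_class, value in positions}
total_loss = sum(losses.values())
print(f"Estimated stressed loss across positions: ${total_loss:.1f} million")
```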
For multifamily loan portfolios and nonfarm nonresidential loans with maturities beyond 2 years, all of which are subsets of the commercial and industrial loan or commercial real estate portfolios, the Federal Reserve used internal models and purchased an outside vendor service that allowed it to estimate losses for these portfolios. For the remaining commercial portfolios, the Federal Reserve used different existing models found at both the Federal Reserve and Federal Reserve district banks and new models developed to meet the needs of SCAP. When analyzing BHCs’ mortgage portfolios, the consumer loans Supervisory Analytical and Advisory Team provided templates to the BHCs to collect granular data for such analysis, allowing the system to separate BHCs’ mortgage portfolios into much more granular tranches than would be possible using data from regulatory filings. The Federal Reserve further used data from various sources, including a large, comprehensive loan-level database of most mortgages that have been securitized in the United States, to assist in developing its own loss estimates to benchmark against the BHCs’ proprietary estimates. These examples point to enhancements in the ability to assess risks to individual firms and across the banking sector that resulted from the SCAP stress test. The Federal Reserve has made clear that it views many of these innovations in its ability to assess and model risks and potential losses as permanent additions to its toolkit, and it has also recognized the need for more timely and granular information to improve its supervision of BHCs and other institutions. However, the extent to which these models and tools will be distributed across the Federal Reserve district banks and other federal banking regulators is unclear. In addition, as the stress test applied to trading positions was limited to those BHCs that held trading positions of at least $100 billion as of February 20, 2009, the Federal Reserve has not indicated that it will roll out its new system to BHCs with smaller trading positions. The Federal Reserve has taken steps to maintain and enhance the tools and data used during SCAP. Further, improving the Federal Reserve’s financial data collection and supervisory tools will require additional resources, training for bank examiners, and coordination in the dissemination of new infrastructure across all U.S. financial regulators and, according to a Federal Reserve governor, would also benefit from relief from the Paperwork Reduction Act of 1980. The Federal Reserve lacks a complete plan for how it will achieve permanent improvements in its risk identification and assessment infrastructure, but according to officials, such a plan is in development. The Federal Reserve has finalized a plan that describes a governance structure for overseeing large, complex financial organizations. The plan defines the roles and responsibilities of various committees and teams within the Federal Reserve that will carry out its supervisory responsibilities over these organizations. However, further planning is needed to incorporate lessons learned from SCAP, to address data and modeling gaps that existed prior to the crisis, and to establish a structure for disseminating improvements to risk identification and assessment. Specifically, this plan will also be critical to addressing improvements to data and modeling infrastructure in supervising not only large financial holding companies but also smaller institutions.
A fully developed plan would also consider how to disseminate data, models, and other infrastructure to the entire Federal Reserve System and bank regulatory agencies, as well as the newly established Financial Stability Oversight Council and Treasury’s Office of Financial Research. Without such a plan, the agency runs the risk of not optimizing its oversight responsibilities, especially in light of its new duties as the systemic risk regulator under the Dodd-Frank Act. Another critical lesson from SCAP was the need for robust coordination and communication among the regulators in examining large, complex financial institutions. Officials from the regulatory agencies and BHCs stated that the degree of cooperation among the SCAP regulators was unprecedented and improved the understanding of the risks facing the individual BHCs and the financial market. Such coordination and communication will become increasingly important as banking regulators increase their oversight role. Even with recent major reform to the financial regulatory structure, multiple regulatory agencies continue to oversee the banking industry, and regulators will need to prioritize efforts to promote coordination and communication among staff from these agencies so that emerging problematic issues affecting the financial industry are identified in a timely manner and effectively addressed. Going forward, based on our discussions with various SCAP participants and statements by Federal Reserve officials, including the chairman, the regulators’ experience with SCAP is anticipated to lead to the expanded use of horizontal examinations and multidisciplinary staff that will require extensive interagency coordination. Horizontal examinations may involve multiple regulators and underscore the importance of effective coordination and communication. Currently, regulators are conducting horizontal examinations of internal processes that evaluate the capital adequacy at the 28 largest U.S. BHCs. Their focus is on the use of stress testing and scenario analyses in ICAAP, as well as how shortcomings in fundamental risk management practices and governance and oversight by the board of directors for these processes could impair firms’ abilities to estimate their capital needs. Regulators recently completed the initial phase of horizontal examinations of incentive compensation practices at 25 large U.S. BHCs. As part of this review, each organization was required to submit an analysis of shortcomings or “gaps” in its existing practices relative to the principles contained in the proposed supervisory guidance issued by the Federal Reserve in the fall of 2009 as well as plans—including timetables—for addressing any weaknesses in the firm’s incentive compensation arrangements and related risk-management and corporate governance practices. In May 2010, regulators provided the banking organizations feedback on the firms’ analyses and plans. These organizations recently submitted revised plans to the Federal Reserve for addressing areas of deficiencies in their incentive compensation programs. In a June 2010 press release, the Federal Reserve noted that to monitor and encourage improvements in compensation practices by banking organizations, its staff will prepare a report after 2010 on trends and developments in such practices at banking organizations. Our prior work has found that coordination and communication among regulatory agencies is an ongoing challenge. 
For example, in 2007, OCC onsite examiners, as well as officials in headquarters, told us that coordination issues hampered the Federal Reserve’s horizontal examinations. Also, in 2007, a bank told us that it had initially received conflicting information from the Federal Reserve, its consolidated supervisor, and the OCC, its primary bank supervisor, regarding a key policy interpretation. Officials from the bank also noted that when the Federal Reserve collected information, it did not coordinate with OCC, the primary bank examiner of the lead bank, resulting in unnecessary duplication. We noted that to improve oversight in the future, regulators will need to work closely together to expedite examinations and avoid such duplication. Since the SCAP stress test was concluded, the following examples highlight ongoing challenges in coordination and communication: Officials from OCC and FDIC indicated that they were not always involved in important discussions and decisions. For example, they were not involved in the decision to reduce GMAC’s SCAP capital requirement, even though they were significantly involved in establishing the original capital requirement. Also, FDIC noted that it was excluded from this decision even though it is the primary federal bank regulator for GMAC’s retail bank (Ally Bank). The Federal Reserve held an internal meeting to discuss lessons learned from SCAP but has yet to reach out to the other SCAP regulators. OCC and FDIC told us that they had not met with the Federal Reserve as a group to evaluate the SCAP process and document lessons learned. As a result, FDIC and OCC did not have an opportunity to share their views on which aspects of SCAP worked and did not work, as well as any potential improvements that could be incorporated into future horizontal reviews or other coordinated efforts. In the recent horizontal examinations, both FDIC and OCC noted that the interagency process for collaboration—especially in the initial design stages—was not as effective as it was for SCAP. OCC commented that more collaboration up front would have been preferable. Also, FDIC stated that the Federal Reserve did not include it in meetings to formulate aggregate findings for the horizontal examination of incentive compensation programs, and it experienced difficulties in obtaining aggregate findings from the Federal Reserve. The Federal Reserve commented that FDIC was involved in the development of findings for those organizations that control an FDIC-supervised subsidiary bank and that FDIC has since been provided information on the findings across the full range of organizations included in the horizontal review, the majority of which do not control an FDIC-supervised subsidiary bank. These continued challenges in ensuring effective coordination and communication underscore the need for sustained commitment and effort by the regulators to ensure the inclusion of all relevant agencies in key discussions and decisions regarding the design, implementation, and results of multiagency horizontal examinations. As the SCAP process has shown, active participation by all relevant regulators can strengthen the approaches used by examiners in performing their supervisory activities. Without continuous coordination and communication, the regulators will miss opportunities to leverage perspectives and experiences that could further strengthen the supervision of financial institutions, especially during horizontal examinations of financial institutions.
Publicly reporting a comparison of the actual performance of the SCAP BHCs and the estimated performance under the more adverse scenario provides insights into the financial strength of the nation’s largest BHCs. Senior Federal Reserve officials have publicly disclosed select aggregate information about the performance of the 19 BHCs, consistent with the recommendation in our June 2009 report. Specifically, we recommended that the Federal Reserve consider periodically disclosing to the public the performance of the 19 BHCs against the SCAP estimates during the 2-year period. However, the Federal Reserve has yet to commit to completing a final analysis that compares the BHCs’ actual performance with the estimated performance under SCAP’s more adverse economic scenario for the entire 2-year period and to making this analysis public. Such an analysis is important for the market and BHCs to assess the rigor of the stress test methodology. Publicly releasing the results also would allow the public to gauge the health of the BHCs that participated in SCAP, a group that is a strong proxy for the entire U.S. banking industry. In addition, public disclosure of this analysis could act as a catalyst for a public discussion of the value of effective bank risk management and enhance confidence in the regulatory supervision of financial institutions. The public release of the stress test methodology and results helped improve market confidence in the largest BHCs during the recent financial crisis and provided an unprecedented window into the bank supervision process. Subsequently, the Chairman of the Federal Reserve and a Federal Reserve governor have publicly stated that greater transparency should be built into the supervisory process and that feedback from the public could help increase the integrity of the supervisory process. Increased transparency can also augment the information that is available to investors and counterparties of the institutions tested and enhance market discipline. Despite these statements, the Federal Reserve and other bank regulators have yet to start a formal dialogue about this issue, nor have they developed a plan for integrating public disclosures into the ongoing supervisory process. Such a plan could detail the types of information that would benefit the markets if they were publicly released; the planned methodology for the stress tests, including assumptions; the frequency with which information would be made public; and the various means of disseminating the information. Taking into account the need to protect proprietary information and other market-sensitive information would be an important part of such a plan. While regulators will undoubtedly face challenges in determining how best to overcome skepticism about the potential effects on the financial markets of disclosing sensitive information on the financial health of banks, the Dodd-Frank Act requires that the Federal Reserve and certain banks publicly release a summary of results from periodic stress tests. Without a plan for enhancing the transparency of supervisory processes and practices, bank regulators may miss a significant opportunity to further strengthen market discipline and confidence in the banking industry by providing investors, creditors, and counterparties with useful information. The SCAP stress test shed light on areas for further improvement in the regulators’ bank supervision processes, including oversight of risk management practices at BHCs.
Prior to SCAP, regulatory oversight of stress tests performed by the BHCs themselves was ineffective. Specifically, although regulators required stress tests, the guidelines for conducting them were more than a decade old, and the individual banks were responsible for designing and executing them. The Federal Reserve’s reviews of the internal stress tests were done at the discretion of the BHCs’ individual supervisory teams and were not consistently performed. Further, even though BHCs performed stress tests, they were not required to report the results of their stress testing to the Federal Reserve unless regulators specifically requested them. Post-SCAP, however, the Federal Reserve has stated that stress testing will now be a fundamental part of its oversight of individual banks. The Federal Reserve expects to play a more prominent role in reviewing assumptions and results and in providing input into the BHCs’ risk management practices. While the Federal Reserve has begun to take steps to augment its oversight, Federal Reserve examiners currently lack specific criteria for assessing the severity of BHCs’ stress tests. Without specific criteria, Federal Reserve examiners will not be able to ensure the rigor of BHCs’ stress tests. Furthermore, the absence of such criteria could lead to variations in the intensity of these assessments by individual examiners and across regional districts. The experience with SCAP also showed that regulators needed relevant and detailed data to improve oversight of individual banks and to identify and assess risks. As the Federal Reserve and the other regulators conduct more horizontal reviews, they will need a robust plan for quantitatively assessing the risk in the banking sector. Collecting timely data for the annual stress testing and other supervisory actions will be critical in order to better ensure the safety and soundness of the banking industry. The Federal Reserve has finalized a plan that describes a governance structure for overseeing large, complex financial organizations. However, further planning is needed to incorporate lessons learned from SCAP, to address data and modeling gaps, and to establish a structure for disseminating improvements to risk identification and assessment. Further, efforts to improve the risk identification and assessment infrastructure will need to be effectively coordinated with other regulators and the newly established Financial Stability Oversight Council and Treasury’s Office of Financial Research in order to ensure an effective systemwide risk assessment. Without fully developing a plan that can identify BHCs’ risks in time to take appropriate supervisory action, the Federal Reserve may not be well-positioned to anticipate and minimize future banking problems and ensure the soundness of the banking system. Despite the positive coordination and communication experience of the SCAP stress test, developments since the completion of SCAP have renewed questions about the effectiveness of regulators’ efforts to strengthen their coordination and communication. For example, on important issues, such as finalizing GMAC’s SCAP capital amount, the Federal Reserve chose not to seek the views of other knowledgeable bank regulators. While the Dodd-Frank Act creates formal mechanisms that require coordination and communication among regulators, the experiences from SCAP point to the need for a sustained commitment by each of the banking regulators to enhance coordination and communication.
In particular, ensuring the inclusion of relevant agencies in key discussions and decisions regarding the design, implementation, and results of multiagency horizontal examinations will be critical. If regulators do not consistently coordinate and communicate effectively during horizontal examinations, they run the risk of missing opportunities to leverage perspectives and experiences that could further strengthen bank supervision. To gain a better understanding of SCAP and inform the use of similar stress tests in the future, we recommend that the Chairman of the Federal Reserve direct the Division of Banking Supervision and Regulation to: Compare the performance of the 19 largest BHCs against the more adverse scenario projections following the completion of the 2-year period covered in the SCAP stress test ending December 31, 2010, and disclose the results of the analysis to the public. To leverage the lessons learned from SCAP to the benefit of other regulated bank and thrift institutions, we recommend that the Chairman of the Federal Reserve, in consultation with the heads of the FDIC and OCC, take the following actions: Follow through on the Federal Reserve’s commitment to improve the transparency of bank supervision by developing a plan that reconciles the divergent views on transparency and allows for increased transparency in the regular supervisory process. Such a plan should, at a minimum, outline steps for releasing supervisory methodologies and analytical results for stress testing. Develop more specific criteria to include in its guidance to examiners for assessing the quality of stress tests and how these tests inform BHCs’ capital adequacy planning. These guidelines should clarify the stress testing procedures already incorporated into banking regulations and incorporate lessons learned from SCAP. Fully develop its plan for maintaining and improving the use of data, risk identification and assessment infrastructure, and requisite systems in implementing its supervisory functions and new responsibilities under the Dodd-Frank Act. This plan should also ensure the dissemination of these enhancements throughout the Federal Reserve System and to other financial regulators, as well as to new organizations established under the Dodd-Frank Act. Take further steps to more effectively coordinate and communicate among themselves, for example, by ensuring that all applicable regulatory agencies are included in discussions and decisions regarding the development, implementation, and results of multiagency activities, such as horizontal examinations of financial institutions. We provided a draft of this report to the Federal Reserve, FDIC, OCC, OTS, and Treasury for review and comment. We received written comments from the Chairman of the Federal Reserve Board of Governors and the Assistant Secretary for Financial Stability. These comments are summarized below and reprinted in appendixes IV and V, respectively. We also received technical comments from the Federal Reserve, FDIC, OCC, and Treasury, which we incorporated into the report as appropriate. OTS did not provide any comments. In addition, we received technical comments from the Federal Reserve and most of the 19 SCAP BHCs on the accuracy of our tracking of revenues and losses in 2009 for each of the SCAP BHCs and incorporated them into the report as appropriate. In its comment letter, the Federal Reserve agreed with all five of our recommendations for building on the successes of SCAP to improve bank supervision.
The Federal Reserve noted that our recommendations generally relate to actions it is currently undertaking or planning to take under the Dodd-Frank Act. It also stated that, in coordination with FDIC and OCC, it would provide a public assessment of BHCs’ performance relative to the loss and preprovision net revenue estimates under the more adverse scenario, taking into account the limitations of such an analysis. For our remaining recommendations related to increased transparency, examiner guidance, risk identification and assessment, and coordination and communication of multiagency activities, the Federal Reserve generally noted that it has taken steps in these areas and will continue to consult with the FDIC and OCC in implementing our recommendations and its new responsibilities under the Dodd-Frank Act. While our report recognizes the steps that the Federal Reserve has taken related to transparency, examiner guidance, risk identification and assessment, and coordination and communication of multiagency activities, these areas warrant ongoing attention. For example, as we note in the report, while the Federal Reserve is in the process of finalizing examination guidance for reviewing stress tests, examiners currently do not have specific criteria for assessing the severity of these tests, nor have they coordinated with the other bank regulators. Until this guidance is completed, examiners will lack the information needed to fully ensure the rigor of BHCs’ stress tests, and the Board will not be able to fully ensure the consistency of assessments by individual examiners. Our report also notes the positive coordination and communication experience of the SCAP stress test, but we continued to find specific instances since the completion of SCAP that have renewed questions about the effectiveness of regulators’ efforts to strengthen their coordination and communication. For instance, while the Federal Reserve included relevant agencies in key discussions and decisions regarding the design, implementation, and results of SCAP, we found that the Federal Reserve missed opportunities to include other bank regulators when planning more recent horizontal examinations. Treasury agreed with our report findings, noting that it appreciated our acknowledgment that SCAP met its goals of providing a comprehensive, forward-looking assessment of the balance sheet risks of the largest banks and increasing the level and quality of capital held by such banks. It further noted that the unprecedented public release of the stress test results led to an increase in market confidence in the banking system, which aided in improving the capital adequacy of the largest banks. We are sending copies of this report to the appropriate congressional committees, the Chairman of the Federal Reserve, the Acting Comptroller of the Currency, the Chairman of FDIC, the Acting Director of the Office of Thrift Supervision, and the Secretary of the Treasury. We are also sending copies of this report to the Congressional Oversight Panel, the Financial Stability Oversight Board, the Special Inspector General for TARP, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this letter.
GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this report were to (1) describe the process used to design and conduct the stress test and participants’ views of the process, (2) describe the extent to which the stress test achieved its goals and compare its estimates with the bank holding companies’ (BHC) actual results, and (3) identify the lessons regulators and BHCs learned from the Supervisory Capital Assessment Program (SCAP) and examine how each is using those lessons to enhance its risk identification and assessment practices. To meet the report’s objectives, we reviewed the Board of Governors of the Federal Reserve System’s (Federal Reserve) The Supervisory Capital Assessment Program: Design and Implementation (SCAP design and implementation document) dated April 24, 2009, and The Supervisory Capital Assessment Program: Overview of Results (SCAP results document) dated May 7, 2009. We analyzed the initial stress test data that the Federal Reserve provided to each BHC, the subsequent adjustments the Federal Reserve made to these estimates, and the reasons for these adjustments. We reviewed BHC regulatory filings such as the Federal Reserve’s 2009 Consolidated Financial Statements for Bank Holding Companies (FR Y-9C, or Y-9C); company quarterly 10-Qs and annual 10-Ks; speeches and testimonies regarding SCAP and stress testing; BHCs’ presentations to shareholders and earnings reports; bank supervision guidance issued by the Federal Reserve, Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC); and documents regarding the impact of SCAP and the financial crisis and proposed revisions to bank regulation and supervisory oversight. To further understand these documents and obtain different perspectives on the SCAP stress test, we interviewed officials from the Federal Reserve, OCC, FDIC, and the Office of Thrift Supervision, as well as members of the multidisciplinary teams created to execute SCAP. We also collected data from SNL Financial—a private financial database that contains publicly filed regulatory and financial reports, including those of the BHCs involved in SCAP—in order to compare the BHCs’ actual performance in 2009 against the regulators’ 2-year SCAP loss estimates and GAO’s 1-year pro rata loss estimates. To obtain additional background information regarding the tracking of the BHCs, perspectives on their performance, anticipated loan losses, and the success of SCAP in achieving its goals, we interviewed relevant officials (e.g., chief risk officers and chief financial officers) from 11 of the 19 BHCs that participated in the SCAP stress test. The BHCs we interviewed were the American Express Company; Bank of America Corporation; The Bank of New York Mellon Corporation; BB&T Corporation; Citigroup Inc.; GMAC LLC; The Goldman Sachs Group, Inc.; JPMorgan Chase & Co.; MetLife, Inc.; Regions Financial Corporation; and Wells Fargo & Company. We selected these BHCs to reflect differences in size, types of financial services provided, geographic location, primary bank regulator, and participation in the Troubled Asset Relief Program (TARP). In addition, we met with credit rating agency officials from Standard & Poor’s Corporation, Moody’s Corporation, and Fitch Ratings Inc. for their perspectives on SCAP and their own stress test practices.
To more completely understand the execution of SCAP, we completed a literature search of stress tests conducted by others—for example, the Committee of European Banking Supervisors and the International Monetary Fund. We also reviewed relevant credit rating agency reports and the reports of other oversight bodies, such as the Congressional Oversight Panel and the Special Inspector General for the Troubled Asset Relief Program, on topics related to stress testing and TARP. We also reviewed our past work on the bank supervisory process and SCAP. In addition, to track the actual performance of the 19 BHCs, we collected data from several sources. We then compared the BHCs’ actual performance to the December 31, 2008, capital levels presented in SCAP and the projections made under the more adverse scenario for estimated losses for loans, securities (available for sale and held to maturity), trading and counterparty, and resources other than capital to absorb losses. Our primary source for SCAP estimates was the May 7, 2009, SCAP results document, which contained the estimates for each of the 19 BHCs and aggregate data for all BHCs. We also reviewed the confidential April 24, 2009, and May 5, 2009, presentations that the SCAP regulators made to each of the 19 BHCs to identify estimates of preprovision net revenue (PPNR) and changes in allowance for loan and lease losses (ALLL) for the 2 years ended 2010. Our primary source for the actual results at the BHCs was the Federal Reserve’s Y-9C. In doing so, we used the SNL Financial database to extract data from the Y-9C and from Securities and Exchange Commission Forms 10-K and 10-Q. These data were collected following the close of the fourth quarter of 2009, the halfway point of SCAP’s 2-year time frame. Since losses were not estimated on a quarter-by-quarter or yearly basis but projected for the full 2-year period, we assumed that losses and revenue estimates under the more adverse scenario were distributed at a constant rate across the projection period. Thus, we compared the actual 2009 year-end values with half of the Federal Reserve’s 2-year SCAP projections. This methodology has some limitations because losses, expenses, revenues, and changes to reserves are historically unevenly distributed and loss rates over a 2-year period in an uncertain economic environment can follow an inconsistent path. However, the Federal Reserve, OCC, credit rating agencies, an SNL Financial analyst, and most of the BHCs we interviewed that are tracking performance relative to SCAP estimates use the same methodology. We assessed the reliability of the SNL Financial database by following GAO’s best practices for data reliability and found that the data were sufficiently reliable for our purposes. To confirm the accuracy of our BHC tracking data, we shared our data with the Federal Reserve and the 19 SCAP BHCs. We received comments and incorporated them as appropriate. Some of the data that we collected were not in a form that was immediately comparable to the categories used in the SCAP results, and we had to make adjustments in order to make the comparison. For tier 1 common capital, most asset categories, and resources other than capital to absorb losses, we had to find a methodology suited to aggregating these data so that we could compare them with the corresponding SCAP data. For example, net charge-offs for the various loan categories are broken out into more subcategories in the Y-9C than those listed in the SCAP results.
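The constant-rate assumption described above reduces to a simple proration: the 1-year benchmark is half of the 2-year SCAP projection. The sketch below shows the comparison; the category names and dollar amounts are hypothetical and used only to illustrate the method.

```python
# Sketch of GAO's pro rata comparison: halve the 2-year SCAP projection
# (constant-rate assumption) and compare it with the 2009 actual.
# Category names and amounts are hypothetical.

def pro_rata_benchmark(two_year_estimate, elapsed_fraction=0.5):
    """Prorate a 2-year SCAP estimate over the portion of the period elapsed."""
    return two_year_estimate * elapsed_fraction

loss_categories = {
    # category: (2-year SCAP loss estimate, 2009 actual loss), $ billions
    "first_lien_mortgages": (10.0, 4.2),
    "credit_card_balances": (8.0, 4.5),
}

for name, (estimate_2yr, actual_2009) in loss_categories.items():
    benchmark = pro_rata_benchmark(estimate_2yr)
    status = "below" if actual_2009 < benchmark else "at or above"
    print(f"{name}: actual ${actual_2009:.1f}B vs. benchmark ${benchmark:.1f}B "
          f"({status} the pro rata estimate)")
```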
In addition, we calculated “Resources Other than Capital to Absorb Losses” to correspond to the SCAP definition of PPNR minus the change in ALLL, which required obtaining data from multiple entries within the Y-9C. When calculating noninterest expense, we removed the line item for goodwill impairment losses because this item was not included in the SCAP regulators’ projections. We calculated the change in ALLL through December 31, 2009. The SCAP regulators, however, considered an increase in ALLL over the full 2-year period to be a drain on resources, because the provisions made to increase the ALLL balance would not be available to absorb losses during the 2-year SCAP time frame. This notion creates a problem in using the formula for 1-year tracking purposes because an increase in ALLL during 2009 would require provisions for that increase, but those added reserves could ultimately be used to absorb losses during 2010. To maintain consistency, our calculation considers ALLL increases during 2009 to be a drain on resources, but we recognize that this money could act as a resource to absorb losses rather than a drain on those resources. We faced an additional limitation pertaining to the ALLL calculation and a challenge with regard to the treatment of trading and counterparty revenues. In our review of SCAP documentation, we found that SCAP regulators used two different ALLL calculations—one calculation for 4 of the BHCs that included a reserve for off-balance sheet items and another for the remaining 15 BHCs that did not include off-balance sheet reserves. The Federal Reserve confirmed that there were two different calculations that were not adjusted for consistency. In order to be consistent across the BHCs, we applied the same methodology that the regulators used for 15 of the BHCs to the remaining 4. The treatment of trading and counterparty revenue created a challenge because the data in the Y-9C include both customer-derived revenue from transactions for BHCs that operate as broker-dealers and gains (or losses) from proprietary trading and certain associated expenses. These items are presented only in net form in the Y-9C. However, for the five BHCs (Bank of America Corporation; Citigroup, Inc.; Goldman Sachs Group, Inc.; JPMorgan Chase & Co.; and Morgan Stanley) that had their trading portfolios stressed, the trading and counterparty line is based on projections of gains (losses) from proprietary trading, but PPNR (specifically noninterest revenue) is based on customer-derived revenue from transactions for BHCs that operate as broker-dealers. Because we could not segregate these items based on the Y-9C, we have included the net amount in both the trading and counterparty and noninterest income line items. This means that the net amount of the trading gains or losses as reported in the Y-9C is included in two places in our tracking table for those five BHCs. For the remaining 14 BHCs, we included the entire line item in noninterest income, as that is where it was located in the SCAP projections. Table 6 shows the items we used to calculate tier 1 capital, asset losses, PPNR, and ALLL as of December 31, 2009, along with specific references to the sources we used. Some elements within the table required a more detailed aggregation or calculation and are therefore explained further in tables 7 and 8 below.
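The resources calculation described above can be summarized in a few lines. The sketch below uses simplified stand-ins for the Y-9C entries (the actual schedule items are identified in table 6), computes PPNR in the standard way (net interest income plus noninterest income less noninterest expense) with goodwill impairment excluded, and subtracts the 2009 change in ALLL; all dollar amounts are hypothetical.

```python
# Sketch of "resources other than capital to absorb losses": PPNR minus the
# change in ALLL, with goodwill impairment excluded from noninterest expense.
# Field names are simplified stand-ins for Y-9C entries; amounts ($ billions)
# are hypothetical.

y9c = {
    "net_interest_income": 40.0,
    "noninterest_income": 30.0,
    "noninterest_expense": 45.0,   # includes goodwill impairment losses
    "goodwill_impairment": 2.0,
    "alll_end_2009": 20.0,
    "alll_end_2008": 15.0,
}

ppnr = (y9c["net_interest_income"] + y9c["noninterest_income"]
        - (y9c["noninterest_expense"] - y9c["goodwill_impairment"]))
change_in_alll = y9c["alll_end_2009"] - y9c["alll_end_2008"]

# Consistent with the regulators' 2-year formula, an increase in ALLL is treated
# as a drain on resources, even though the added reserves could absorb losses in 2010.
resources = ppnr - change_in_alll
print(f"PPNR: ${ppnr:.1f}B; change in ALLL: ${change_in_alll:.1f}B; "
      f"resources: ${resources:.1f}B")
```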
For reporting these capital measures and asset balances for the year ending December 31, 2008, we generally relied on the figures published in various SCAP documents. Table 7 shows our methodology for calculating tier 1 common capital, including the part of the Y-9C in which the data can be found. Currently, there is no defined regulatory method for calculating tier 1 common capital, and it is not a required data field for BHCs to file in their Y-9C submissions. As a result, we developed a formula consistent with the Federal Reserve’s by reviewing the guidance available in the SCAP design and implementation and SCAP results documents and consulting with SNL Financial regarding its methodology. Table 8 provides a crosswalk for the asset classification we used to group the various charge-off categories listed in the Y-9C. To ensure additional comparability with SCAP, we attempted to identify any unique circumstances that could skew the results. For example, after we shared our initial tracking estimates with the 19 BHCs, one BHC identified an issue with our calculation of tier 1 common capital that resulted from the way information is reported on the Y-9C. After discussing the issue with the BHC and verifying its explanation, we adjusted our calculation to more accurately reflect its position. Another BHC had a one-time charge that had been included in the “Other” loss category, and we decided to segregate this item as a separate line item. We also submitted our tracking spreadsheet to the Federal Reserve and to each BHC to give them an opportunity to provide input and ensure the accuracy and comparability of our numbers. Appropriate adjustments to 2009 numbers based on information received from the Federal Reserve and individual BHCs are noted, where applicable, in the tables in appendix III. Some items that affect precise comparisons between actual results and the pro rata estimates are disclosed in our footnotes, rather than as adjustments to our calculations. For example, the stress test was applied to loan and other asset portfolios as of December 31, 2008, without including a calculation for ongoing banking activities. Because the Y-9C data include ongoing activity as of the date of the report, the actual results are slightly different from the performance of the stressed assets, as the BHCs were treated as liquidating concerns rather than going concerns in the SCAP stress test. Distinguishing between the gains (losses) from legacy assets and those that resulted from new assets is not possible using public data. Other examples are that SCAP did not include the impact of the owned debt value adjustment or one-time items (occurring subsequent to SCAP) in the PPNR projections. As credit default swap spreads narrowed in 2009, liability values increased at most banks, causing a negative impact on revenue at those banks that chose to account for their debt at fair value, but these losses were not included in the SCAP estimates. One-time items, such as sales of business lines, were also not included in the SCAP estimates of PPNR, as these events occurred subsequent to the stress test and, in part, could not be fully predicted as a part of SCAP. Rather than remove the losses from the owned debt value adjustments and the gains (or losses) due to one-time items from the BHCs’ 2009 PPNR results, we disclosed the amounts in footnotes for the applicable BHCs.
We chose this treatment so that PPNR would reflect actual results at the BHCs, while still disclosing the adjustments needed for more precise comparability to SCAP. We identified the TARP status of each of the 19 BHCs that participated in SCAP by reviewing data from the Treasury’s Office of Financial Stability’s TARP Transactions Report for the Period Ending September 22, 2010 (TARP Transactions Report) and the SCAP results document. We used the SCAP results document to identify BHCs that were required to raise capital. The TARP Transactions Report was then used to identify the program under which TARP funds were received (if any), the amount of funds received, the capital repayment date, the amount repaid, and the warrant disposition date and to determine whether the warrants were repurchased or sold by Treasury in a public offering. To gain a better understanding of future potential losses, we determined the percentage of BHCs’ total loans that are either nonaccrual or more than 90 days past due using Y-9C data from the SNL Financial database. We used quarterly data for the period 2007 through 2009 on nonaccrual loans and past due balances of more than 90 days for each of the BHCs. We aggregated the data into the same six loan categories used in SCAP: first-lien mortgages, second/junior-lien mortgages, commercial and industrial loans, commercial real estate loans, credit card balances, and “Other.” (See tables 8 and 9 for details.) Once the data were aggregated, we divided the data by the applicable total loan balance for each category at each point in time (i.e., on a quarterly basis). One limitation is that Y-9C data were not available for all periods for four of the BHCs (American Express Company; GMAC LLC; The Goldman Sachs Group, Inc.; and Morgan Stanley) because they had recently become BHCs. As a result, we did not include these BHCs in the calculation during those periods in which their Y-9Cs were not available (fourth quarter of 2008 and earlier for all except GMAC LLC, which also did not have a Y-9C in the first quarter of 2009). We collected Y-9C data from the SNL Financial database to calculate loan loss rates across BHCs with more than $1 billion of assets and to compare the 19 SCAP BHCs with the indicative loss rates provided by the SCAP regulators. We used annual data for the year ended December 31, 2009, on loan charge-offs. We also used average total loan balances. In the Y-9C, total loan balances were categorized somewhat differently from charge-offs. Table 9 provides a crosswalk for the asset classification. We aggregated loan balance data into the same categories that were used in the indicative loss rate table in SCAP: first-lien mortgages, prime mortgages, Alt-A mortgages, subprime mortgages, second/junior-lien mortgages, closed-end junior liens, home equity lines of credit, commercial and industrial loans, commercial real estate loans, construction loans, multifamily loans, nonfarm nonresidential loans, credit card balances, other consumer, and other loans. Once the data were aggregated into these categories, we divided the net charge-offs by the applicable average loan balance. This calculation showed the loss rate for each category (e.g., first-lien mortgages and commercial real estate) for the year ended December 31, 2009. This methodology was applied to calculate the loss rates for the 19 SCAP BHCs and for all BHCs with more than $1 billion of assets, respectively.
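Both ratios described above are simple quotients; the sketch below shows them for a single hypothetical loan category. The quarterly balances, nonaccrual amounts, and charge-offs are hypothetical.

```python
# Sketch of the two ratios described above: the nonperforming share
# (nonaccrual plus 90+ days past due, divided by total loans in the category)
# and the annual loss rate (net charge-offs divided by the average loan balance).
# All figures ($ billions) are hypothetical.

def nonperforming_share(nonaccrual, past_due_90_plus, total_loans):
    return (nonaccrual + past_due_90_plus) / total_loans

def annual_loss_rate(net_charge_offs, quarterly_balances):
    average_balance = sum(quarterly_balances) / len(quarterly_balances)
    return net_charge_offs / average_balance

# Example for a single category, such as credit card balances.
print(f"Nonperforming share: {nonperforming_share(2.1, 0.9, 60.0):.1%}")
print(f"Annual loss rate: {annual_loss_rate(5.4, [58.0, 60.0, 61.0, 62.0]):.1%}")
```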
Because these four institutions had recently converted to BHCs, Y-9C data on loan balances were not available for the fourth quarter of 2008 for American Express Company; The Goldman Sachs Group, Inc.; and Morgan Stanley, and were not available for GMAC LLC for either the fourth quarter of 2008 or the first quarter of 2009. Therefore, for GMAC LLC and American Express Company, we approximated the loan balances for these periods using their Form 10-Q filings. Because The Goldman Sachs Group, Inc. and Morgan Stanley generally have considerably smaller loan balances than the other BHCs, we did not approximate their fourth quarter of 2008 balances. Instead, their average loan balances were simply based on the available data (that is, the first quarter of 2009 through the fourth quarter of 2009).

We conducted this performance audit from August 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Twelve of the 19 bank holding companies (BHC) that participated in the Supervisory Capital Assessment Program (SCAP) had redeemed their Troubled Asset Relief Program (TARP) investments and had their warrants disposed of as of September 22, 2010, and most of them were not required to raise capital under SCAP (table 10). Six of the 19 BHCs tested under SCAP have not repaid TARP investments or disposed of warrants, and one, MetLife, Inc., did not receive any TARP investments. BHCs participating in SCAP must follow specific criteria to repay TARP funds. In approving applications from participating banks that want to repay TARP funds, the Federal Reserve considers various factors. Some of these factors include whether the banks can demonstrate an ability to access the long-term debt market without relying on the Federal Deposit Insurance Corporation's (FDIC) Temporary Liquidity Guarantee Program and whether they can successfully access the public equity markets, remain in a position to facilitate lending, and maintain capital levels in accord with supervisory expectations. BHCs intending to repay TARP investments must have post-repayment capital ratios that meet or exceed SCAP requirements.

Table 11 shows the names, locations, and total assets as of December 31, 2008, of the 19 bank holding companies (BHC) subject to the Supervisory Capital Assessment Program (SCAP) stress test that was conducted by the federal bank regulators in the spring of 2009. The stress test was a forward-looking exercise intended to help federal banking regulators gauge the extent of the additional capital buffer necessary to keep the BHCs strongly capitalized and lending even if economic conditions between December 2008 and December 2010 were worse than had been expected. The following tables (12 through 30) compare the 2009 performance of the 19 BHCs involved in SCAP to the 2-year SCAP estimates and the GAO 1-year pro rata estimates for the more adverse economic scenario. Specifically, these tables compare actual results with estimates for losses and gains associated with loans, securities, and trading and counterparty positions, as well as resources, preprovision net revenue (PPNR), and allowance for loan and lease losses (ALLL).
These tables also include a comparison of actual capital levels at December 31, 2009, and December 31, 2008. Totals may not add due to rounding. For a more detailed explanation of the calculations made in constructing this analysis, see appendix I. Daniel Garcia-Diaz (Assistant Director), Michael Aksman, Emily Chalmers, Rachel DeMarcus, Laurier Fish, Joe Hunter, William King, Matthew McDonald, Sarah M. McGrath, Timothy Mooney, Marc Molino, Linda Rego, and Cynthia Taylor made important contributions to this report.
The Supervisory Capital Assessment Program (SCAP) was established under the Capital Assistance Program (CAP), a component of the Troubled Asset Relief Program (TARP), to assess whether the 19 largest U.S. bank holding companies (BHC) had enough capital to withstand a severe economic downturn. Led by the Board of Governors of the Federal Reserve System (Federal Reserve), federal bank regulators conducted a stress test to determine if these banks needed to raise additional capital, either privately or through CAP. This report (1) describes the SCAP process and participants' views of the process, (2) assesses SCAP's goals and results and BHCs' performance, and (3) identifies how regulators and the BHCs are applying lessons learned from SCAP. To do this work, GAO reviewed SCAP documents, analyzed financial data, and interviewed regulatory, industry, and BHC officials.

The SCAP process appeared to have been mostly successful in promoting coordination, transparency, and capital adequacy. The process utilized an organizational structure that facilitated coordination and communication among regulatory staff from multiple disciplines and organizations and with the BHCs. Because SCAP was designed to help restore confidence in the banking industry, regulators took unusual steps to increase transparency by releasing details of their methodology and sensitive BHC-specific results. However, several participants criticized aspects of the SCAP process. For example, some supervisory and bank industry officials stated that the Federal Reserve was not transparent about the linkages between some of the test's assumptions and results. Despite these views, most of the participants in SCAP agreed that coordination and communication were effective and could serve as a model for future supervisory efforts. According to regulators, the process resulted in a methodology that yielded credible results. By design, the process helped to ensure that BHCs would be capitalized for a potentially more severe downturn in economic conditions from 2009 through 2010.

SCAP largely met its goals of increasing the level and quality of capital held by the 19 largest U.S. BHCs and, more broadly, strengthening market confidence in the banking system. The stress test identified 9 BHCs that met the capital requirements under the more adverse scenario and 10 that needed to raise additional capital. Nine of the 10 BHCs were able to raise capital in the private market; the exception, GMAC LLC, received additional capital from the U.S. Department of the Treasury (Treasury). The resulting capital adequacy of the 19 BHCs has generally exceeded SCAP's requirements, and two-thirds of the BHCs have either fully repaid or begun to repay their TARP investments. Officials from the BHCs, credit rating agencies, and federal banking agencies indicated that the Federal Reserve's public release of the stress test methodology and results in the spring of 2009 helped strengthen market confidence.

During the first year of SCAP (2009), overall actual losses for the 19 BHCs were generally below GAO's 1-year pro rata loss estimates under the more adverse economic scenario. Collectively, the BHCs experienced gains in their securities and trading and counterparty portfolios. However, some BHCs exceeded the GAO 1-year pro rata estimated 2009 losses in certain areas, such as consumer and commercial lending. Most notably, in 2009, GMAC LLC exceeded the loss estimates in multiple categories for the full 2-year SCAP period.
Additional losses in the residential and commercial real estate markets and further deterioration in economic conditions could challenge the BHCs, even though they have been deemed to have adequate capital levels under SCAP. This report recommends that the Federal Reserve complete a final 2-year SCAP analysis and apply lessons learned from SCAP to improve the transparency of bank supervision, examiner guidance, risk identification and assessment, and regulatory coordination. The Federal Reserve agreed with our five recommendations and noted actions it already has under way to address them. Treasury agreed with the report's findings.
Available data showed that case dispositions and processing times in disciplinary cases during the period of January 1, 1996, through June 30, 1998, differed for SES employees and lower-level, or general schedule (GS), staff. In addition, a 1997 IRS internal study found that actions taken against lower-level employees more closely conformed to the IRS table of penalties than actions taken against higher-graded employees. However, because of dissimilarities in the types of offenses and incomplete case files, these data do not necessarily prove disparate treatment. Agencies must consider many factors, such as the nature and seriousness of the offense; the employee’s job level and type of employment; whether the offense was intentional, technical, or inadvertent; the employee’s past disciplinary record; and the notoriety of the offense or its impact upon the reputation of the agency, in deciding what penalty, if any, should be imposed in any given case. IRS recognized that problems have hindered the processing and resolution of employee misconduct cases and has begun revamping its disciplinary systems. For the period we studied, IRS tracked disciplinary cases for GS and SES employees in different systems. The Office of Labor Relations (OLR), which is the personnel office for non-SES staff, handled GS cases. It tracked these cases in the Automated Labor and Employee Relations Tracking System (ALERTS), although IRS officials told us that ALERTS data were often missing or incomplete. The Office of Executive Support (OES), which is the personnel office for IRS executives, handled SES cases. Although ALERTS was supposed to also track SES cases, OES tracked SES cases by using a log and monthly briefing reports. The monthly briefing reports were used to inform the Deputy Commissioner about the status of cases. We selected the cases for our study of disciplinary actions for SES and lower-level staff as follows: For GS cases, we used ALERTS data for 22,025 cases received in, or closed by, OLR between January 1, 1996, and June 30, 1998. For SES cases, our information came from two sources: (1) a 70-case random sample of SES nontax misconduct case files that were active between January 1, 1996, and June 30, 1998; and (2) for the same time period, 43 other SES nontax cases reported either in the logs or as “overaged” SES cases in the monthly briefing reports. In total, we looked at 113 cases involving 83 SESers. Unless otherwise noted, all SES statistics presented in this section are based on the random sample. See appendix I for more information on how we selected the cases for our study. We were unable to make many meaningful statistical comparisons between SES and GS employee misconduct cases for three reasons. First, we were able to collect more detailed data through our SES file review than from the ALERTS database used for GS cases. This was particularly true regarding dates on which important events occurred. As a result, we could not compare average processing time at each phase of the disciplinary process, although we were able to compare processing times from case receipt through case closure. Second, the level of detail and accuracy of ALERTS data varied widely. Some IRS regions historically took ALERTS data entry more seriously than others did, according to an IRS memorandum, and cases contained varying levels of detail about case histories, issues, facts, and analyses. ALERTS had few built-in system controls to ensure data integrity. 
Instead, IRS relied on managers to ensure the accuracy of their subordinates' work. Third, some data were missing for the majority of the cases tracked in ALERTS. For example, we could not analyze the frequency with which final dispositions were less severe than proposed dispositions because both pieces of information were available for only about 13 percent of the ALERTS cases. Because officials said that ALERTS was OLR's means of recording information on lower-level disciplinary cases, we used it to the extent that it had information comparable to what we collected on SES cases.

Available data showed that processing times and the frequency and type of case dispositions differed for SES and lower-level staff. On average, from OES' or OLR's receipt of a case until case closure, SES cases, on the basis of our 70-case random sample, lasted almost a year (352 days), and lower-level cases lasted less than 3 months (80 days). We estimated that the largest difference between SES and GS case dispositions occurred in the closed without action (CWA) and clearance categories. As shown in table 1, the dispositions in 73 percent of SES cases were CWA or clearance, versus 26 percent for GS cases. CWA is to be used to close a case when the evidence neither proves nor disproves the allegation(s). A disposition of clearance is to be used when the evidence clearly establishes that the allegations are false. In practice, neither disposition results in a penalty. The actual breakdown between the two dispositions is as follows: for SES cases, 61 percent were CWA and 12 percent were clearance; for GS cases, 24 percent were CWA and 2 percent were clearance.

Table 1 outlines in order of severity the frequency with which available data indicate that various dispositions were imposed for SES and lower-level staff. SES data are based on the 56 closed cases in our 70-case sample. GS data are based on 15,656 closed cases in ALERTS. Ninety-five-percent confidence intervals for the SES data are presented to more accurately portray our findings; a simplified illustration of one way such intervals can be computed appears at the end of this discussion. Using these confidence intervals, the rates of occurrence differed between SES and GS cases for dispositions of clearance and CWA, reprimand, suspension, and other. However, using 95-percent confidence intervals and eliminating the CWA or clearance category from the analysis, the rates of occurrence between SES and GS cases were similar for all dispositions except oral or written counseling and retired/resigned. Nevertheless, as we discuss later in this report, differences in dispositions of SES and GS cases do not necessarily mean that the dispositions were inappropriate or that disparate treatment occurred.

We also analyzed disciplinary actions for an additional 43 SES cases. Because these cases were not randomly selected, the results may not be representative. Of the 43 cases, we found 9 in the more serious categories: 6 instances of counseling, 1 reprimand, 1 suspension, and 1 removal. As further detailed in the upcoming section of this report on alleged case-processing delays by the Deputy Commissioner, SES cases took a long time to close for many reasons. These reasons included poor case-tracking procedures, inadequate file management, and poor communication among agency officials involved in the disciplinary process. We do not know to what extent, if any, these difficulties contributed to differences in processing times between SES and GS cases.
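The report does not describe how the 95-percent confidence intervals for the SES sample estimates were constructed, so the sketch below illustrates only one standard approach: a normal-approximation interval for a proportion estimated from a simple random sample, with an optional finite population correction. The sample figures come from the text above; the population size is a placeholder used for illustration.

```python
# Illustrative sketch: a normal-approximation 95-percent confidence interval for a
# proportion estimated from a simple random sample, with an optional finite
# population correction. This is one standard approach; the report does not state
# which method GAO actually used, and the population size below is a placeholder.
import math
from typing import Optional, Tuple


def proportion_ci(p_hat: float, n: int, population: Optional[int] = None,
                  z: float = 1.96) -> Tuple[float, float]:
    """Return (lower, upper) bounds of an approximate 95% confidence interval."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    if population is not None and population > n:
        se *= math.sqrt((population - n) / (population - 1))  # finite population correction
    margin = z * se
    return max(0.0, p_hat - margin), min(1.0, p_hat + margin)


if __name__ == "__main__":
    # Figures from the text: 61 percent of the 56 closed cases in the sample were
    # closed without action (CWA). The population size of 113 cases is illustrative only.
    lower, upper = proportion_ci(p_hat=0.61, n=56, population=113)
    print(f"Estimated CWA rate: 61% (approximate 95% CI: {lower:.0%} to {upper:.0%})")
```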
Many factors can affect the discipline imposed in a particular case. These factors include the nature and seriousness of the offense; the employee's job level and type of employment; whether the offense was intentional, technical, or inadvertent; the employee's past disciplinary record; and the notoriety of the offense or its impact upon the reputation of the agency. Collectively, these factors are components of what is known as the Douglas Factors, and they must be considered in determining the appropriate penalty in a case. See appendix II for a listing of the Douglas Factors. Not all of the Douglas Factors will be pertinent in every case, and, while some factors will weigh in the employee's favor (mitigating factors), others may weigh against the employee (aggravating factors). IRS officials told us that lower-level actions tend to be more straightforward than SES actions, with fewer mitigating factors. Since mitigating factors tend to reduce the level of discipline imposed, this could partially explain why penalties might be imposed differently in lower-level cases than in SES cases.

We found that allegations against SES employees were usually reported to a hotline, the Department of the Treasury's Office of Inspector General (OIG), or the IRS Inspection Service. Because complaints against SES employees can be made anonymously, IRS' ability to follow up on a complaint or investigate it thoroughly can be affected. In contrast, IRS officials told us that GS cases were generally filed by managers about their subordinates. In these cases, the complainant was known and generally provided concrete evidence to support the allegation. Further, typical issues surrounding lower-level cases may be less complicated or easier to successfully investigate than those involving SES employees. Table 2 outlines in more detail the most common issues in SES and lower-level staff cases. SES data are based on our 70-case sample. GS data are based on 22,025 cases in ALERTS. We subjectively classified the issues in SES cases, and our classifications may not be precise. Overall, we found that the most common issue in SES cases was prohibited personnel practices, while time and attendance was the most common issue in GS cases.

In 1994, in response to an internal IRS study reporting a perception that managers received preferential treatment in disciplinary matters, IRS created a table of penalties, the Guide for Penalty Determinations. The purpose of the guide was to ensure that decisions on substantiated cases of misconduct were appropriate and consistent throughout IRS. In 1997 and 1998, IRS studied the effect of the guide on GS and SES employees and found that actions taken against lower-graded employees more closely conformed to the guide than those taken against higher-graded employees (see table 3). For GS employees overall, 91 percent of disciplinary actions conformed to the guide, versus 74 percent for SES employees. When disciplinary actions did not conform to the guide, the actions were below the guide's prescribed range 93 percent of the time for GS employees overall, versus 100 percent of the time for GS-13 through GS-15 and SES employees. In addition, if admonishments were included as part of reprimands, conformance with the guide approached 100 percent for GS-13 through GS-15 employees.

The IRS study and IRS officials agreed that the guide had limitations and no longer met IRS needs. Specifically, the guide covered all employees but did not address statutory and regulatory limitations that restricted management's ability to impose disciplinary suspensions on SES employees.
IRS officials said that governmentwide, there was no level of discipline available for SES employees that was more severe than a reprimand but less severe than a suspension of at least 15 days. In contrast, GS employees could have received suspensions of 14 days or less. While the guide prescribed a penalty range of "reprimand to suspension," the statutory limitation against suspensions of less than 15 days meant that a reprimand was the only option for SES employees if management wished to impose a penalty short of the harshest available one. IRS officials also told us that in certain cases, they might have imposed discipline in between a reprimand and a 15-day suspension had they had the option to do so. According to IRS officials, IRS' 1995 attempt to have the Office of Personnel Management deal with this issue was unsuccessful. Statutory and regulatory requirements could partially explain why reprimands might have been imposed when a harsher disciplinary action might have seemed more appropriate. Applying to employees at different levels, the IRS penalty guide was constructed with very broad recommended discipline ranges to provide for management discretion. However, one IRS study pointed out that, in some instances, this rendered the guide useless (e.g., when the penalty range was "reprimand to removal").

IRS created a disciplinary review team in September 1998. Among other things, the team was to develop an action plan that addressed case handling and complaint systems; review and revise IRS' Guide for Penalty Determinations; and develop a process to review and monitor complaints. As of March 1999, the team was proposing a new integrated IRS complaint process. Its intent was to overcome problems with complaint processing systems that did not (1) communicate or coordinate with each other, (2) capture the universe of complaints, (3) specifically track or accurately measure complaints, and (4) follow up on complaints to ensure that appropriate corrective action had been taken. The team was proposing a 26-person Commissioner's Review Group to, among other things, manage and analyze complaints sent to the Commissioner of Internal Revenue, monitor other IRS complaint systems, and coordinate with the systems' representatives. The team was also redesigning the penalty guide.

On the basis of our review of SES cases, we did not find a case in which an individual who was ineligible to retire at the time an allegation was filed retired while the case was pending with the Deputy Commissioner. However, we found cases that spent up to 4 years at this stage in the disciplinary process and cases that stalled at various points throughout the process. Although OES' goal for closing an SES case was 90 days, cases in our random sample took OES almost 1 year, on average, to close. Further, IRS had poor case-tracking procedures, inadequate file management, missing and incomplete files, and poor communication among officials involved in the disciplinary process.

Because IRS' 1990 and 1994 written SES case-handling procedures were out of date, IRS officials described the operable procedures to us. During the period covered by our review, OES handled SES misconduct cases. Its goal for closing a case was 90 days from its receipt of the case. Once OES received a case, it was to enter it into ALERTS, although it did not always do this, and prepare a case analysis.
The case analysis and supporting documents were then to be forwarded to the appropriate Regional Commissioner, Chief, or Executive Officer for Service Center Operations, who was to act as the "recommending official." Within 30 days, the recommending official was to review the case with the help of local labor relations experts, develop any additional facts deemed appropriate, and return a case report to OES, including a recommendation for disposition. If OES disagreed with the report for any reason, it was to include a "statement of differences" in its case analysis. OES was to forward the field report and the OES analysis to the Deputy Commissioner's office for concurrence or disapproval. If the Deputy Commissioner concurred with the proposed disposition, the recommending official could take action. If the Deputy Commissioner did not approve, he could impose a lesser disposition or return the case to OES for further development. IRS executive case-handling procedures did not define a time period within which the Deputy Commissioner was to act on case dispositions.

We collected information on SES cases from two sources: (1) the five specific cases mentioned during the April 1998 Senate Finance hearings and (2) a 70-case random sample of the SES misconduct case files as previously described, plus 43 more cases from OES tracking logs and monthly briefing reports, for a total of 113 cases. These 113 cases involved 83 individuals. Again, see appendix I for more details on how we selected the cases to study.

Of the 113 SES cases we reviewed, we did not find a single instance in which an individual who was ineligible to retire at the time the allegation was filed retired while the case was pending with the Deputy Commissioner. Overall, of the 83 individuals involved in the 113 cases, 25 people, or 30 percent, had retired from IRS by December 31, 1998. Of these 25 people, 13 retired before their cases were closed or the cases were closed because the individuals retired. At the time of retirement, cases for 2 of the 13 people were pending in the Deputy Commissioner's office, but both of these individuals had been eligible to retire at the time the complaints against them were originally filed. Cases for the remaining 11 of the 13 people either were still being investigated or were pending in OES, that is, they had not yet reached the Deputy Commissioner's office. In doing our analyses, we focused on actual retirements and did not reach general conclusions about eligibility to retire.

As table 4 shows, of the five executive cases mentioned during the April 1998 hearings, two of the executives were already eligible to retire when the allegations against them were filed. We refer to the executives in the five cases as Executives A through E. One of the two eligible executives, Executive B, was still an IRS employee as of September 30, 1998. The other, Executive D, retired while, in OES' view, his case was pending in the Deputy Commissioner's office. Of the three individuals who were not eligible to retire when the allegations against them were filed, one retired 16 months after his case was closed. The other two executives, one of whom was not found culpable, were still employed by IRS as of September 30, 1998. IRS records showed that the misconduct cases spent from 2 months to 4 years at the Deputy Commissioner level. See appendix III for more details about the five cases.
As shown in table 5, on the basis of our random sample, the total processing time for SES misconduct cases averaged 471 days (almost 16 months) from the date the complaint was filed until the case was closed. Most of this time involved OES case analysis and referral to the recommending official for inquiry (214 days, or about 7 months) and investigation by the recommending official (124 days, or more than 4 months). These averages exceeded IRS' most recent written case-processing time guidelines, which were 14 and 30 days, respectively. The average total time from OES' receipt of a case to the case's closure was 352 days, compared to a goal of 90 days. As previously mentioned, there was no targeted time frame for the Deputy Commissioner's review. However, on average, cases spent 42 days at this level. In addition, we found that some cases took a particularly long time to be resolved. For example, in our sample cases, from the date the complaint was filed to the date the case was closed, 8 cases took at least 2 years, an additional case took more than 3 years, and still another case took longer than 4 years.

In 1992, IRS acknowledged that the best way to prevent employees from retiring before their cases closed was to improve timeliness. Although we found no cases in which individuals ineligible to retire when allegations were made retired with the case pending before the Deputy Commissioner, the longer it takes to close cases, the more likely it becomes that individuals will retire or resign while their cases are open.

Our review and a recent IRS task force report identified numerous problems with the executive misconduct case-handling process. These problems included inadequate staffing, poor communication, inaccurate and incomplete records and files, outdated procedures, conflicts over proposed case dispositions, and internal disagreement about case investigations. These problems contributed to the lengthy case-processing times reflected in the available data and case files.

According to IRS officials, IRS' downsizing a few years ago significantly affected OES and field staff resources. From late 1996 through early 1998, OES devoted only one staff year to executive misconduct cases. The staff year was divided between the Director and one employee. In mid-1998, the Director moved to Labor Relations, and the employee retired, leaving OES with no resident expertise. Previously, four or five case experts handled executive cases. In total, according to an IRS official, the office was understaffed for about 18 months, which caused a case backlog. However, the new Chief of OES was able to bring the staffing level up to eight, including two individuals with employee relations backgrounds to act as team leaders. She also used detailees and a technical contractor to reduce the case backlog. The understaffing issue also extended to the labor relations functions in the regions. These functions supplied the staff that recommending officials used to investigate misconduct cases. When the regional offices were consolidated several years ago, they lost their labor relations functions as well as a central repository for program administration and expertise.

IRS did not enter executive misconduct cases into ALERTS from late 1996 through early 1998. IRS officials told us they did not have enough labor relations experts to properly track cases on ALERTS because the system required significant detail about each case. Instead, OES tracked these cases using logs and monthly briefing reports.
OES also used the briefing reports to inform the Deputy Commissioner of case status. IRS officials acknowledged that these independent systems often disagreed with each other about the details and status of the cases. Our review found that poor communication among IRS support staff, the Deputy Commissioner's office, IRS Inspection, and OIG contributed to case-processing delays. As previously mentioned, the Deputy Commissioner considered one case to be closed with the transfer of the individual, but OES was not told to formally close the case. In another instance, the Deputy Commissioner told us that he inadvertently allowed a case to be lost in the system. Case information in the ALERTS, OES, and IRS Inspection tracking systems was also found to be inconsistent and inaccurate in many instances. For example, according to IRS officials, cases recorded as "overaged" in the IRS Inspection system were recorded as "closed" by the field offices, leading to confusion among officials as to whether a case was open or closed and where a particular case was pending at a given time.

An internal IRS study found that many cases had timeliness problems, especially cases that had been referred to IRS from OIG. In certain instances, cases stayed at a particular phase in the process for months before an OES employee inquired about their status. In one instance, for nearly 2 years, OES did not follow up on the status of an OIG investigation. IRS officials told us that these problems occurred primarily because IRS had no contact person for OIG cases before early 1997, and because OES lacked staff resources to properly monitor cases.

Our review identified several concerns surrounding IRS' files, records, and miscellaneous procedures for executive misconduct cases. Examples included the following:

Poor filing. Executive misconduct cases were to be filed alphabetically. Several times, we happened upon misfiled cases only because we went through all of the files to draw our sample. Also, in one instance, a closing letter addressed to the executive involved in a case was filed instead of being mailed to the individual. It took nearly 5 months for the error to be discovered and rectified.

Missing files and records. We requested eight case files for our review that IRS could not provide, even after more than 4 months.

Incomplete files. In some cases, the case files did not document important information, such as dates, transmittal memorandums, and final case dispositions. In one instance, the case file consisted of a single E-mail message. The case was serious enough to warrant suspending the individual.

Noncompliance with procedures. In several instances, field staff imposed discipline before the Deputy Commissioner had concurred with the proposed action. Several files contained memorandums to the field staff, reminding them not to impose discipline or close a case until the Deputy Commissioner had indicated his approval. Further, as mentioned in appendix III, a premature disposition occurred in one of our case studies.

According to two 1998 IRS internal studies, outdated procedures led to inefficient case handling and confusion as to who was responsible for what. Because of regional and district consolidations and a national office restructuring, the written 1994 case-handling procedures no longer accurately depicted the proper flow of cases. Although procedures were informally adjusted and work kept moving, the process was not efficient.
As a result, ad hoc procedures were developed in each region, leading to communication problems between the regions and the national office. IRS recognized this problem in March 1998 and completed a draft of new case procedures in July 1998. During that time, the Internal Revenue Service Restructuring and Reform Act of 1998 established the Treasury Inspector General for Tax Administration (TIGTA), and procedures were again revised to accurately depict TIGTA's role. According to IRS officials, draft procedures were sent to IRS field offices for comment in mid-March 1999.

Another factor contributing to case-processing delays was internal disagreement surrounding the proper level of discipline to impose in particular cases. In our case studies, we noted instances in which internal disputes significantly lengthened case-processing times. OES officials told us that this situation occurred much more frequently in the past. However, over the past few years, IRS has made a concerted effort to resolve disputes below the Deputy Commissioner level. As shown in table 6, in the cases involving Executives C and D, disagreements were serious. In fact, they warranted formal statements of differences. In each of these two cases, OES endorsed a stronger level of discipline than that suggested by the recommending official. In the case of Executive E, IRS officials disagreed among themselves over the facts of the case. Although an IRS Internal Security investigation confirmed the allegations, the Deputy Commissioner was not convinced that they were correct. However, he eventually agreed that the allegations had some merit. The Deputy Commissioner issued a letter of counseling 5-1/2 years after the complaint was filed, which was more than 4 years after he received the case.

As of March 1999, an IRS disciplinary review team was proposing changes to overcome problems with complaints processing. One of the units of its proposed Commissioner's Review Group was to provide labor relations support for SES and other cases. This unit would have 11 employees. In addition, the Commissioner's Review Group would have a contractor available to supplement it and support field investigations when management believed help was needed. As previously mentioned, the group would also be responsible for overcoming communication and coordination problems among complaint-processing systems.

IRS did not comprehensively collect and analyze information on reprisals against IRS employee whistleblowers or on IRS retaliation against taxpayers. Some information was available on the number of IRS-related whistleblowing reprisal cases resolved by the two agencies responsible for considering such cases. For example, one of the agencies, OSC, received 63 IRS whistleblower reprisal matters over the fiscal years 1995 through 1997 and obtained action from IRS favorable to employees in 4 cases. Concerning allegations of IRS retaliation against taxpayers, we reported in 1996 and 1998 that IRS did not systematically capture information needed to identify, address, and prevent such taxpayer abuse. During this review, we also found limited and incomplete IRS information on past revenue agent retaliation against taxpayers. The IRS Restructuring and Reform Act of 1998 included several provisions related to abuse or retaliation against taxpayers, their representatives, or IRS employees. As of March 1999, the IRS disciplinary review team was proposing how data needed to fulfill the act's requirements would be assembled.
It is against the law to take a personnel action as a reprisal against a whistleblower. More specifically, an employee with personnel authority is not allowed to take, fail to take, or threaten a personnel action against an employee because the employee made a protected disclosure of information. Protected disclosures include disclosures that an employee reasonably believes show a violation of law, rule, or regulation; gross mismanagement; gross waste of funds; or an abuse of authority. If federal employees believe they have been subject to reprisal, they may pursue their complaint through the agency where they work. Alternatively, they may direct their complaint to OSC or MSPB. We could not determine the extent of reprisal against whistleblowers because IRS did not track information on whistleblower claims of reprisal. According to a knowledgeable IRS official, until recently, the ALERTS database did not have a code to capture information on retaliation associated with individuals, including reprisal against whistleblowers. However, OSC and MSPB provided the number of complaints filed with them. Under the Whistleblower Protection Act of 1989, OSC’s main role is to protect federal employees, especially whistleblowers, from prohibited personnel practices. In this role, OSC is to act in the interests of the employees by investigating their complaints of whistleblower reprisal and initiating appropriate actions. Whistleblowing employees may file a complaint with OSC for most personnel actions that are allegedly based on whistleblowing. As shown in table 7, between fiscal years 1995 and 1997, OSC received 63 whistleblowing reprisal matters related to IRS, compared to 2,092 for the federal government as a whole. However, OSC concluded that a much smaller number of IRS and governmentwide reprisal matters involved potentially valid statutory claims and therefore warranted more extensive investigation. OSC closed cases without further action for many reasons, including lack of jurisdiction over an agency or employee, absence of an element needed to establish a violation, and insufficient evidence. Since IRS had about 100,000 employees during this period, the ratio of matters received to the number of employees was less than a tenth of 1 percent. Similarly, although OSC received whistleblowing reprisal matters from throughout the federal government, the number of matters received was an extremely small percentage of the civilian employee federal workforce that numbered almost 2 million people. As table 7 further shows, at times both IRS and the federal government took “favorable actions” as a result of OSC investigations. In general, favorable actions are those that may directly benefit the complaining employee, punish the supervisor involved, or systematically prevent future questionable personnel actions. Agencies take these actions after receiving a request from OSC or with knowledge of a pending OSC investigation. The four favorable actions taken by IRS between fiscal years 1995 and 1997 entailed removing disciplinary letters from a personnel file, correcting an employee’s pay level, presenting a performance award, and promoting an employee retroactively and providing back pay. Employee complaints of whistleblowing reprisal may reach MSPB in two ways. First, if employees do not obtain relief through OSC, they may appeal to MSPB. Second, employees may appeal directly to MSPB without first going through OSC. 
They may do this for actions including adverse actions, performance-based removals or reductions in grade, denials of within-grade salary increases, reduction-in-force actions, and denials of restoration or reemployment rights. MSPB categorizes both types of appeals as "initial appeals." MSPB administrative judges throughout the country decide initial appeals. The judges either dismiss the cases or decide them on their merits. Common reasons for dismissing cases are that they do not raise appealable matters within MSPB's jurisdiction or that they are not filed within the required time limit. The parties to the dispute also may enter into a voluntary settlement, sometimes with assistance from the judge. Cases not dismissed or settled are adjudicated on their merits. Possible outcomes are that the agency action may be affirmed or reversed or the agency penalty may be mitigated or otherwise modified. A party dissatisfied with a case decision may file a "petition for review" by MSPB's three-member board. The board may grant a petition if it determines that the initial decision was based on an erroneous interpretation of law or regulation or if new and material evidence became available. It may dismiss a petition that is untimely, withdrawn by the parties, or moot. Petitions may also be denied or settled.

As with OSC, the number of whistleblowing reprisal decisions issued by MSPB was very small compared to the size of the IRS and federal workforces. As shown in table 8, for fiscal years 1995 through 1997, MSPB decided 45 initial appeals of whistleblowing reprisal allegations involving IRS. Similar to MSPB's rulings involving the rest of the federal government, MSPB dismissed the majority of initial appeals involving IRS and denied the majority of petitions for review. However, settlements occurred in more than half of the initial appeals that were not dismissed, which could mean that employees were getting some relief. MSPB also occasionally remanded petitions for review, that is, sent them back for further consideration. MSPB ordered IRS corrective action (canceling an employee's removal and mandating back pay) in one initial appeal case when due process measures unrelated to reprisal were not followed. To our knowledge, except for this case, MSPB did not reverse any IRS actions regarding alleged whistleblower reprisal matters over the 3-year period. For government initial appeals as a whole, MSPB ordered agency corrective action 11 times and otherwise reversed agency actions in 24 instances.

Before the IRS Restructuring and Reform Act of 1998, IRS did not systematically collect information on retaliation against taxpayers. As we have previously reported, IRS information systems were designed for tracking disciplinary and investigative cases or correspondence and not for identifying, addressing, or preventing retaliation against taxpayers. The systems contained data elements that encompassed broad categories of employee misconduct, taxpayer problems, and legal action. Information in the systems related to allegations of taxpayer abuse was not easily distinguishable from information on allegations not involving taxpayers. Consequently, we found limited information on potential taxpayer abuse in IRS information systems, as shown in table 9. Recent changes in the law and IRS' progress on information systems are intended to improve IRS' ability to determine the extent to which its employees might have retaliated against taxpayers or against employees for whistleblowing.
Enacted in July 1998, the IRS Restructuring and Reform Act of 1998 included several provisions related to abuse or retaliation against taxpayers, their representatives, or IRS employees. For example, among the grounds for which section 1203 of the act requires the termination of IRS employees are "violations of the Internal Revenue Code of 1986, Department of Treasury regulations, or policies of the Internal Revenue Service (including the Internal Revenue Manual) for the purpose of retaliating against, or harassing, a taxpayer, taxpayer representative, or other employee of the Internal Revenue Service" and "threatening to audit a taxpayer for the purpose of extracting personal gain or benefit." The act also required the Treasury Inspector General for Tax Administration to include in its annual report summary information about any termination under section 1203 or about any termination that would have occurred had the Commissioner not determined there were mitigating factors.

In March 1999, the disciplinary review team previously described was proposing that the Commissioner's Review Group report these data to the Inspector General as well as broader data on the number of taxpayer complaints and the number of taxpayer abuse and employee misconduct allegations. The group would collect, consolidate, and validate data from existing systems and obtain supplemental information to fill gaps. However, according to the team, the group would have to qualify the initial reports to the Inspector General until data reliability had been established.

With respect to allegations of improper zeroing out or reductions of recommended tax by IRS managers, we found no evidence to support the allegations in the eight specific cases referred to us by the IRS employees who testified at the hearings. On the other hand, IRS does not systematically collect data on the extent to which additional taxes recommended by IRS auditors are zeroed out or reduced without a basis in law or IRS procedure. While there are no data on improper reductions, there are data on IRS recommendations of additional tax that were not ultimately assessed. On the basis of such data, we recently reported that the majority of recommended additional taxes was not assessed. We attributed this result to a variety of factors, including the complexity of the tax code and the overreliance on taxes recommended as a measure of audit results.

IRS' process for auditing taxpayers' returns and closing related disputes over additional recommended taxes has several steps. In an audit, an IRS auditor usually reviews the taxpayer's books and records to determine compliance with tax laws and identify whether the proper amount of tax has been reported. To close an audit, the auditor may recommend increasing, decreasing, or not changing the tax reported. If a taxpayer disagrees with the recommendation at the close of the examination, the taxpayer may request an immediate review by the auditor's supervisor. If the taxpayer agrees with the recommended additional tax or does not respond to IRS' notices of examination results, IRS assesses the tax. With an assessment notice, IRS formally notifies the taxpayer that the specified amount of tax is owed and that interest and penalties may accrue if the tax is not paid by a certain date. The assessed amount, not the amount an auditor recommends at the end of the audit, establishes the taxpayer's liability. If the taxpayer disagrees with an examination's recommendation, the recommendation may be protested to IRS' Office of Appeals or the dispute can be taken to court.
The Office of Appeals settles most of these disputes, and the remainder are docketed for trial. Agreements made in settlements and court decisions determine the assessed part of the disputed tax. The issue of reductions in recommended tax was raised in the Committee's hearing by IRS auditors who alleged that some supervisors "zeroed out" or reduced the results of audits, that is, the audits were closed with no or reduced recommended additional tax, without a basis in law or IRS procedure. The witnesses further alleged that the reasons for zeroing out included retaliating against auditors to diminish their chances for promotion, favoring former IRS employees in private practice, and exchanging zeroing out for bribes and gratuities from taxpayers.

IRS has not systematically collected data on the extent to which additional taxes recommended by auditors have been zeroed out or reduced without a basis in law or IRS procedure. In particular, IRS had no data on supervisors' improperly limiting auditors' recommendations of additional tax before an audit was closed. However, IRS collects data on the amounts of recommended taxes that were not assessed and the number of examinations closed with no change in tax liability.

One of our recent reports illustrates the lack of data on the extent to which supervisors improperly limit auditors' recommendations of additional tax. We found that an estimated 94 percent of IRS workpapers lacked documentation that the group manager reviewed either the support for adjustments or the report communicating the adjustments to the taxpayer. IRS managers acknowledged that because of competing priorities, they could not thoroughly review workpapers for all audits. IRS officials commented that supervisory reviews were usually completed through other processes, such as reviewing time spent on an audit, conducting on-the-job visits, and discussing cases with auditors. We recommended that the IRS Commissioner require all audit supervisors to document their review of all workpapers to help ensure the quality of all examinations.

In another recent report, we found that most additional taxes recommended by IRS auditors were not assessed. Table 10 shows taxes recommended by IRS auditors and the percentage of these amounts assessed for audits closed in fiscal years 1992 through 1997. During these years, at most, 41 percent of the additional taxes recommended during audits were assessed. Other IRS data showed that many examinations were concluded with no recommended additional tax. For example, according to IRS' Fiscal Year 1997 Data Book, 24 percent of the corporate examinations completed during fiscal year 1997 were closed with no proposed tax change.

Our previous work identified several factors that, in part, explained why recommended additional taxes were not assessed after audits were closed. Factors like these could also explain some actions by supervisors to zero out or reduce recommended tax amounts prior to audits being closed. However, IRS does not collect data on the extent to which these factors, or others, contribute to supervisors' decisions prior to audits being closed. We reported that the complexity and vagueness of the tax code were one explanation for recommended taxes not being assessed after a corporate audit was closed. Because of the complexity and vagueness of the tax code, IRS revenue agents had to spend many audit hours to find the necessary evidence to clearly support any additional recommended taxes.
In addition, differing interpretations in applying the tax code to underlying transactions increased the likelihood of tax disputes. Because corporate representatives usually prevailed in Appeals or the courts, additional taxes recommended were often not actually assessed. We also reported that aspects of the audit process for large corporations made it difficult for revenue agents to develop enough support to recommend tax changes that could survive a taxpayer appeal. For example, revenue agents worked alone on complex, large corporation audits with little direct assistance from district counsel or their group managers. In addition, when selecting returns for audit, the agents had little information on previously audited corporations or industry issues to serve as guideposts. Finally, the agents had difficulty obtaining relevant information from large corporations in a timely manner.

IRS Internal Audit recently cited several factors that contributed to low productivity, as partially manifested by high no-change rates, in the Manhattan District Office. IRS acknowledged that in 1995, it took aggressive action to close old examinations. Also, audit group managers in Manhattan and two other districts did not have enough time to perform workload reviews to ensure quality examinations. Manhattan was below the IRS regional average in complying with IRS audit standards for such things as depth of examinations and workpaper support for conclusions.

We also reported that relying too heavily on additional taxes recommended as a measure of audit results might create undesirable incentives for auditors. We found that audits of large corporations raised concerns that relying on recommended taxes as a performance indicator might encourage auditors to recommend taxes that would be unlikely to withstand taxpayer challenges and thus not be assessed. Supervisors on guard against this incentive, which might also have influenced them, might in turn have been accused of improper zeroing out. In this connection, we recently reported that IRS examination and collection employees perceived that managers considered enforcement results when preparing annual performance evaluations.

IRS is increasing its efforts to ensure that enforcement statistics are not used to evaluate its employees. In commenting on our report on enforcement statistics, the Commissioner stated that IRS was taking several actions to ensure that all employees comply with its policies on the proper use of enforcement statistics. These actions included redrafting applicable sections of the Internal Revenue Manual, establishing a panel responsible for answering all questions IRS received on enforcement statistics, and establishing an independent review panel to monitor compliance with restrictions on using enforcement statistics. In addition, in January 1999, IRS proposed establishing a balanced system of organizational measures focusing on quality and production measures, but not including tax enforcement results.

Several of the individual allegations made by IRS employees that we reviewed involved the issue of improper zeroing out of additional taxes by IRS managers. The eight specific cases in question involved large organizations, and the issues generally related to complex financial transactions. We found no evidence to support the allegations that IRS managers' decisions to zero out or reduce proposed additional taxes were improper.
Instead, we found that the managers acted within their discretion and openly discussed relevant issues with involved IRS agents, technical advisors, and senior management. Ultimately, the decisions were approved by appropriate individuals and were documented in the files. Several of the cases demonstrated some of the concerns and issues we have raised in our prior work concerning audits of large corporations. For example, the complexity and vagueness of the tax code create legitimate differences in interpretation, and administering the tax system creates a tension in seeking a proper balance between the tax administrator's need for supporting documentation and the taxpayer's burden in providing such information.

IRS has acknowledged problems related to the EEO climate in its Milwaukee, WI, area offices and over the last few years has moved to address them. After a finding of discrimination in 1995 in the case of one employee, a new district director initiated an internal review, and, afterwards, IRS appointed an outside review team to study the EEO situation. The internal study made 53 recommendations in broad categories related to creating a supportive work culture, understanding issues, preparing employees for promotion, and examining the promotion process. The outside study found no discriminatory hiring or promotion practices, but it did make recommendations related to hiring and promotions, among other things.

Problems with the EEO climate in IRS' Midwest District Office, which is headquartered in Milwaukee, date back several years. In 1995, Treasury agreed with an Equal Employment Opportunity Commission administrative judge who found that a district employee was the victim of discrimination and retaliation. Also, Wisconsin congressional offices received EEO-related complaints from IRS employees, and internal and external groups were critical of district EEO matters. According to the District Director who arrived in early 1996, the district was perceived to run on "good-old-boy" connections. Also, the district, which was created in 1996 through the merger of three smaller districts, was facing possible layoffs, further contributing to tense labor-management relations.

To try to better identify some of the underlying causes of the problems in IRS' Milwaukee area offices, the District Director commissioned an IRS team in April 1996 to assess the EEO climate and make recommendations for corrective action. As part of its review, the team distributed a survey to all Milwaukee area district employees to gather EEO-related perceptions. On the basis of its review of the survey results and other data, in December 1997, the team reported that a lack of trust and goodwill pervaded the work environment. The survey revealed that people in all groups (e.g., males, females, nonminority whites, African Americans, and Hispanics) believed they were less likely than people in other groups to receive promotions, significant work assignments, training opportunities, and formal recognition or rewards. Specific problems cited in the report included little recent diversity training, a belief by certain minority employees that stereotypes negatively affected their treatment, difficulties in widely disseminating information, gaps in EEO communication, no formal mentoring program, and much dissatisfaction with how employees were selected for promotion. On the basis of its findings, the assessment team made 53 recommendations in 4 categories.
The categories covered creating a supportive culture, creating a greater understanding of issues, preparing employees for promotion, and examining ways that employees were selected for promotion. In a 5th category—examining the representation of minorities in the district—the team made 21 more recommendations, which were expected to be suspended pending an IRS analysis of the ramifications of certain court cases.

The District Director who commissioned the climate assessment report praised it and the process that produced it. During his tenure, many actions were taken to address the district's EEO problems. For example, (1) policy statements were issued making clear that discriminatory behavior would not be tolerated, (2) minority representation in the Director's and EEO offices was increased, (3) the EEO office was given more privacy, (4) baselines were set to measure the impact of any improved hiring or promotion policies, (5) minorities were promoted to positions of authority, and (6) training was provided. Goals were also set to open communications with employees, employee and community groups, and the media; treat individual performance cases fairly; and not debate emotionally charged personnel issues in the press.

In spite of the climate assessment team's efforts and the various changes made or planned, the district's EEO problems persisted. Consequently, IRS and certain members of the Wisconsin congressional delegation agreed that another team should independently review the situation. To try to preserve its independence, the team purposefully had no representation from IRS. Also for this reason, it solicited no IRS comments on its draft report. The team interviewed more than 100 people and examined over 130 records and files, although it did not scientifically select interviewees or broadly survey all district employees. Team members told us they tried to ensure broad coverage by talking to many people and to all sides of general issues. Moreover, they relied on the climate assessment survey to summarize perceptions. They also, however, relied extensively on anecdotal information without determining its objectivity or accuracy.

In August 1998, the team reported, among other things, that (1) many employees had no confidence in the EEO process and feared retaliation if they filed complaints or participated in a way considered adversarial to management, (2) separating EEO functions into outreach and traditional EEO/counseling components was not working effectively, (3) the counseling program was in disarray, and (4) confusion existed over the role of Treasury's Regional Complaint Center in the formal EEO complaint process. Also, although anecdotes collected by the team did not support a sweeping indictment of Milwaukee IRS management practices, the report concluded that, intentionally or not, some practices perpetuated a work environment that was historically insensitive to the concerns of female and minority employees. On the basis of its review, the team made recommendations in different areas. For instance, many recommendations dealt with the team's findings related to the district's EEO process for resolving issues in a precomplaint stage and its relationship to Treasury's formal complaint process. The team also made recommendations relating to hiring and promotions in spite of finding no discriminatory pattern or practice in promoting or hiring minorities or women.
The report noted that African Americans in IRS’ Milwaukee and Waukesha, WI, offices appeared underrepresented when compared to the Milwaukee civilian labor force. Although district managers and representatives of employee groups disagreed with many of the issues and assertions in the report, there was general agreement with many of the recommendations. For instance, the head of the diversity office at the time of the study informed us that he agreed with the substance of, had actually taken action related to, or would favor forwarding to Treasury many of the report’s recommendations. After the report was released, IRS initiated several significant actions to address problems identified. Chief among these was appointing a new District Director who arrived in the district in mid-November 1998 with a stated commitment to overcome past problems. In that regard, she described to us her intent to open communication channels and deal with disrespect, nastiness, and mean-spiritedness at all levels. She emphasized her themes of communication, responsibility, and accountability and told us that on her second day in the district she discussed these themes at an off-site meeting with top managers and union, EEO, and diversity officials. The new District Director also expressed to us her commitment to work with various interest groups. In addition, she combined the district’s EEO and diversity functions, made EEO positions permanent as opposed to rotational, and invited a union representative to be present for interviews for a new EEO officer. The new District Director stated that these actions were on the right track, but because of the long and contentious history of EEO problems in the district, improvements and success will take time. She also noted that better communication and cooperation among IRS and the various internal and external stakeholders will be extremely important in dealing with the district’s long-standing problems. In commenting on a draft of this report, the Commissioner of Internal Revenue described IRS actions on the issues we noted. For instance, he shared our concern that IRS needed to improve how it managed executive misconduct cases. He noted that the recently created Commissioner’s Complaint Processing and Analysis Group, proposed as the Commissioner’s Review Group, will coordinate IRS’ efforts to improve complaint information, especially relating to alleged reprisal against whistleblowers, so that complaints will be promptly and fairly resolved. IRS will also share more information with employees and the public on responses to reprisals and other complaints to highlight a message that all employees will be held accountable for their actions. The full text of the Commissioner’s comments is reprinted in appendix IV. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Senator Daniel Patrick Moynihan, the Ranking Minority Member of the Senate Committee on Finance; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; other interested congressional committees; and other interested parties. This work was done under the direction of Joseph E. Jozefczyk, Assistant Director for Tax Policy and Administration Issues. Other major contributors are listed in appendix V. If you have questions, you may contact me on (202) 512-9110. 
We organized our work to bring together information bearing on the five issues contained in your May 21, 1998, request letter. Accordingly, our objectives were to (1) determine if senior Internal Revenue Service (IRS) managers received the same level of disciplinary action as line staff; (2) determine to what extent, if any, the IRS Deputy Commissioner might have delayed action on substantiated cases of employee misconduct until senior managers were eligible to retire; (3) ascertain the extent to which IRS employees might have retaliated against whistleblowers and against taxpayers or their representatives who were perceived as uncooperative; (4) determine the extent to which IRS employees might have zeroed out or reduced the additional tax recommended from examinations for reasons not related to the merits of the examinations; and (5) describe equal employment opportunity (EEO) issues in IRS offices in the Milwaukee metropolitan area. Our scope and methodology related to each of these objectives follow.

To compare disciplinary experiences of Senior Executive Service (SES) and lower-level employees, we matched data accumulated by sampling senior executives' misconduct cases against data for lower-level employees extracted from IRS' broader disciplinary database, the Automated Labor and Employee Relations Tracking System (ALERTS). We compiled general statistics on how long senior executive cases took by collecting information from every second nontax SES case file in IRS' Office of Executive Support (OES) that was active sometime between January 1, 1996, and June 30, 1998. Our sample included 70 cases. For each case in our sample, we extracted and recorded data from the relevant case file. These data included issues involved, processing dates, information on whether allegations were substantiated by investigators, disciplinary actions proposed and adopted, and information related to retirement. For lower-level employees, that is, general schedule (GS) employees, we obtained selected parts of the ALERTS database from IRS. We ran our statistical analyses on ALERTS cases that IRS' Office of Labor Relations received between January 1, 1996, and June 30, 1998, and on cases that were closed within that period. More specifically, we focused on administrative and IRS Inspection Service cases within ALERTS because they were the categories in which conduct matters were found. Although we did not audit ALERTS, IRS officials told us that, while the data system had had flaws over the years, it was better than it used to be. Because ALERTS was the only source of information available on lower-level disciplinary actions, we used it to the extent that it had information comparable to what we collected on senior-level cases. We also reviewed recent internal IRS and independent studies of IRS' disciplinary systems and interviewed IRS officials about their plans for revamping the systems. One IRS study we reviewed used the lower-level disciplinary database to assess the effect of IRS' using a guide to determine appropriate disciplinary action. We also became familiar with the Douglas Factors, shown in appendix II, governing disciplinary actions imposed and asked IRS officials about the differences, if any, they perceived between SES and lower-level cases.

We took several steps to examine the question of alleged delays in dealing with cases of alleged misconduct by senior executives. First, we studied in depth the five specific cases mentioned in the April 1998 hearings.
This involved examining investigative and personnel files as well as files maintained by OES. In addition, we interviewed various IRS officials, including the Deputy Commissioner, about these cases. We also used the 70-case sample of senior executive cases previously described to obtain more broad-based information about any possible delays. Although most of our analyses were based on this sample, to learn more about the cases that took the most time, we also examined every case file IRS could find that appeared on lists of cases awaiting action at OES for at least 90 days during the January 1, 1996, through June 30, 1998, period we were studying. We also examined cases that appeared on logs that IRS kept, to better ensure that we were not overlooking cases for the period that we did not otherwise encounter. In all, we examined the 70 cases in our sample plus 43 more cases on lists and logs for a total of 113 cases. Because some individuals were involved in more than 1 case, the 113 cases we analyzed covered 83 senior executives. From each of these case files, we extracted the same type of information that we extracted from the sampled case files. Examining lists, logs, and files allowed us to see if recordkeeping practices might have contributed to any delays. To examine the relationship between case-processing and retirement dates, we analyzed where in the case-processing sequence the retirement dates provided us by OES fell. In instances in which OES was also able to readily provide retirement eligibility dates, we considered them in examining processing timeliness as well.

To tabulate the number of whistleblowing reprisal cases, we obtained information from the Office of Special Counsel (OSC) and the Merit Systems Protection Board (MSPB). We did this for cases involving IRS employees and, for contextual purposes, for cases from throughout the federal government. For governmentwide data, we used either information already published or data generated specifically for us. For IRS data, the agencies did special searches of their databases. We did not audit the OSC or MSPB data systems. Because in the MSPB data system not all IRS cases could be isolated, we examined actual case rulings that MSPB gathered for us or that we located on the Internet, looking for Department of the Treasury cases that were really IRS cases. For Treasury cases for which MSPB was not able to give us timely information and information was not on the Internet, we asked IRS to identify whether they involved IRS employees.

In looking for information on IRS employees who might have retaliated against taxpayers or their representatives who were perceived to be uncooperative, we studied our reports on taxpayer abuse. In addition, we interviewed IRS officials and investigated entries under specific codes in various databases to see if relevant issues appeared. Finally, we discussed with IRS officials changes to the information systems that might be coming in the future.

Concerning information on the improper zeroing out or reduction of additional tax recommended, we studied our and Inspection Service reports dealing with examination issues related to audit results. We specifically considered our and IRS information on the extent to which IRS audit recommendations were actually assessed and the factors that could explain the results.
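The date comparisons described earlier in this methodology, in which retirement dates were placed within the case-processing sequence, can be pictured with a minimal sketch. The following Python fragment is purely illustrative; the case records, field names, and dates are hypothetical and do not come from IRS, OES, or ALERTS data.

```python
from datetime import date

# Hypothetical case records; field names and dates are illustrative only,
# not drawn from IRS or OES systems.
cases = [
    {"case_id": "X-01", "received": date(1996, 3, 1), "closed": date(1997, 9, 15),
     "retired": None, "eligible": date(1998, 1, 1)},
    {"case_id": "X-02", "received": date(1996, 6, 1), "closed": date(1998, 2, 1),
     "retired": date(1997, 12, 31), "eligible": date(1995, 5, 1)},
]

def retired_while_pending(case):
    """True if the executive retired after the case was received but before it closed."""
    retired = case["retired"]
    return retired is not None and case["received"] <= retired <= case["closed"]

def eligible_at_receipt(case):
    """True if the executive was already eligible to retire when the case was received."""
    return case["eligible"] <= case["received"]

for c in cases:
    days_open = (c["closed"] - c["received"]).days
    print(c["case_id"], f"{days_open} days open,",
          "retired while pending" if retired_while_pending(c) else "no retirement while pending",
          "| eligible at receipt" if eligible_at_receipt(c) else "| not yet eligible")
```

Applied to each of the 113 case files, this kind of comparison would flag any case in which an executive retired while the case was pending and show whether the executive was already eligible to retire when the case arrived.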
To describe EEO issues in the Milwaukee area, we examined the report of an outside team studying the program and the documents that the team accumulated in doing its work, including an IRS internal EEO climate assessment study. We also interviewed key study participants and affected parties in Washington, D.C., and Milwaukee to better understand what the EEO climate in the area was, how the study report was done, and what had happened since the report was finished. In addition to addressing the concerns of the Senate Committee on Finance, we planned our work to respond to a mandate in the Conference Report on the IRS Restructuring and Reform Act of 1998. The conferees intended for us to review the study team report. We did our work in Washington, D.C., and Milwaukee between June 1998 and March 1999 in accordance with generally accepted government auditing standards.

The Douglas Factors are as follows:
(1) the nature and seriousness of the offense, and its relation to the employee's duties, position, and responsibilities, including whether the offense was intentional or technical or inadvertent, or was committed maliciously or for gain, or was frequently repeated;
(2) the employee's job level and type of employment, including supervisory or fiduciary role, contacts with the public, and prominence of the position;
(3) the employee's past disciplinary record;
(4) the employee's past work record, including length of service, performance on the job, ability to get along with fellow workers, and dependability;
(5) the effect of the offense upon the employee's ability to perform at a satisfactory level and its effect upon supervisors' confidence in the employee's ability to perform assigned duties;
(6) consistency of the penalty with those imposed upon other employees for the same or similar offenses;
(7) consistency of the penalty with the applicable agency table of penalties;
(8) the notoriety of the offense or its impact on the reputation of the agency;
(9) the clarity with which the employee was on notice of any rules that were violated in committing the offense, or had been warned about the conduct in question;
(10) the potential for the employee's rehabilitation;
(11) mitigating circumstances surrounding the offense such as unusual job tensions, personality problems, mental impairment, harassment, or bad faith, malice or provocation on the part of others involved in the matter; and
(12) the adequacy and effectiveness of alternative sanctions to deter such conduct in the future by the employee or others.

This appendix summarizes information about the five senior-level misconduct allegations cited in the April 1998 Senate Finance Committee hearings. The summaries include information about when the executives were eligible to retire and about whether their eligibility dates might have related to how their cases were processed. We refer to the executives in these five cases as Executives A through E.

An IRS employee filed a complaint that Executive A and two other IRS employees violated IRS ethics rules. The IRS employee also alleged that Executive A and the two other employees retaliated against her for reporting the ethics violations. The alleged violations included manipulating a rating system, giving an improper award, falsifying records, and not reporting time card fraud, although Executive A was alleged to be involved only in the last violation. Treasury's Office of Inspector General (OIG) did not find that Executive A was culpable for ethics violations but found that the other two employees were culpable.
IRS attorneys reviewing the case concluded that the information in the OIG report did not demonstrate misconduct on Executive A’s part. Executive A was not eligible for retirement when the allegation was made or when the OIG investigation was closed. This case started when the OIG received an anonymous allegation that Executive B abused travel authority. IRS officials reviewed the allegation and found that Executive B had authorized unjustified travel expenditures. Local management then counseled Executive B that all expenditures needed to be authorized according to IRS procedures. This counseling was confirmed in writing. However, contrary to IRS policy, the counseling took place before the Deputy Commissioner concurred with the proposed case resolution. Executive B was already eligible for retirement at the time the allegation was made. The OIG received an anonymous complaint that Executive C was abusing official travel. The OIG report concluded that Executive C made personal use of some travel benefits earned on government travel. The offices considering the case disagreed among themselves over the facts, the adequacy of the investigation, and the steps to be taken next. The Director of IRS’ Human Resources Division, which was involved in executive misconduct cases earlier in the 1990s, advocated a reprimand, but the recommending official thought that significant circumstances mitigated any disciplinary action. OES prepared a statement of differences and recommended a reprimand. A few months later, the recommending official, finding no abuse and unclear IRS guidance in the area, recommended closing the case without action but cautioning the executive. The next month, the OES official who previously recommended a reprimand sent the case to the Deputy Commissioner, this time agreeing with the recommending official’s position. A few months after that, the OIG reminded the Deputy Commissioner of the previous year’s report and requested appropriate action. Later, OIG officials told OES that they disagreed with OES’ recommendation to close the case without action. Finally, OES wrote the Deputy Commissioner reaffirming the recommendation for closure without action but with cautioning. The Deputy Commissioner counseled the executive 5-½ years after the case began and 18 months after receiving the case. When we asked the Deputy Commissioner why the final stage of case processing took so long, he had no explanation. Executive C was not eligible for retirement at the time the allegation was made or at the time he was counseled. The IRS sexual harassment hotline received an anonymous allegation that Executive D might have harassed a staff member. During the Inspection Service investigation, Executive D refused to answer a question he believed was irrelevant. In its report, the Inspection Service summarized the facts of the investigation and did not conclude whether there was a violation of IRS ethical standards. OES and the recommending official disagreed in their analyses of the report and their resulting recommendations. OES concluded that a 15-day suspension was warranted for the refusal to answer a question even though IRS counsel was not sure a violation really occurred. OES also raised the possibility of reassigning Executive D. The recommending official believed that, in this case, refusal to answer a question did not violate ethics rules, but that counseling was warranted. 
About 39 months after OES prepared a statement of differences, an Inspection Service case-tracking entry indicated that IRS management planned no action on the case. The next year, OES closed the case "administratively" due to the employee's retirement. The Deputy Commissioner told us that, several years before its administrative close, the case was "de facto closed" with Executive D's transfer. He stated that the transfer was the appropriate disciplinary action because Executive D was too familiar with local employees. OES did not close the case until the individual retired several years after the transfer. It did not realize that the Deputy Commissioner considered it closed earlier. Also, IRS officials we asked could not find the case file for at least a few months. Executive D was eligible for retirement at the time the allegation was made.

The Inspection Service began an investigation after an anonymous caller reported to Internal Security that Executive E abused her authority. More than a year later, the investigation confirmed the allegation, and the Director of the Human Resources Division recommended that a letter of reprimand be issued. More than 4 years after that, OES recommended sending a letter of reprimand or a letter confirming counseling. The Deputy Commissioner sent Executive E a letter of counseling 5-½ years after the original complaint and more than 4 years after receiving the case. The Deputy Commissioner explained to us that he had not been comfortable with the allegations' correctness, but that he eventually agreed that the allegations had some merit. He added that the delay in closing the case occurred because he allowed the case to be lost in the system. He did not, he said, cover up for Executive E. Specifically, he stated that reduced OES staffing and a poor information system were contributing factors to the case being delayed without a disposition. Executive E was not eligible for retirement at the time the allegation was made or at the time the counseling letter was sent.

Major contributors to this report:
Lawrence M. Korb, Evaluator-in-Charge, Tax Policy and Administration
Leon H. Green, Senior Evaluator
Deborah A. Knorr, Senior Evaluator
Anthony P. Lofaro, Senior Evaluator
Jacqueline M. Nowicki, Evaluator
Patricia H. McGuire, Assistant Director
MacDonald R. Phillips, Senior Computer Specialist
James J. Ungvarsky, Senior Computer Specialist
Eric B. Hall, Computer Technician
Pursuant to a congressional request, GAO provided information on alleged misconduct by Internal Revenue Service (IRS) employees in their treatment of other IRS employees and taxpayers, focusing on: (1) the specific allegations made at the Senate Committee on Finance hearings; and (2) any underlying systemic or programmatic problems that need to be resolved to protect the rights of taxpayers and IRS employees. GAO noted that: (1) available data showed significant differences between Senior Executive Service and line staff disciplinary cases in terms of dispositions and processing times; (2) IRS found that actions taken against lower-level employees more closely conformed to its established table of penalties than actions taken against higher-graded employees; (3) regarding the allegation that the Deputy Commissioner delayed action on senior manager misconduct cases until the managers were eligible to retire, GAO focused on actual retirements and did not reach general conclusions about eligibility to retire; (4) GAO found no cases in which an individual who was ineligible to retire when an allegation was filed, retired while the case was pending with the Deputy Commissioner; (5) GAO could not determine the extent of reprisal against whistleblowers because IRS did not track whistleblowing reprisal cases; (6) regarding allegations of IRS retaliation against taxpayers, GAO previously reported that IRS information systems were not designed to identify, address, and prevent such taxpayer abuse; (7) with respect to allegations of improper zeroing out or reductions of recommended taxes by IRS managers, GAO found no evidence to support the allegations in the eight specific cases referred to GAO by the IRS employees who testified at the hearings; (8) on the other hand, IRS did not systematically collect data on how much additional taxes recommended by auditors were zeroed out or reduced by IRS employees without a basis in law or IRS procedure; (9) IRS has acknowledged equal employment opportunity-related problems, including problems in hiring and promotion, in its Midwest District Office and has begun addressing them; and (10) IRS' lack of adequate information systems and documentation in the areas of employee discipline, retaliation against whistleblowers and taxpayers, and zeroing out of recommended taxes prevented GAO from doing a more comprehensive analysis of these issues.
In recent years, a number of factors have led to growing concern about the protection of privacy when personally identifiable information is collected and maintained by the federal government. Recent data breaches of personal information at government agencies, such as the data breach at the Department of Veterans Affairs, which exposed the personal information of 26.5 million veterans and active duty members of the military in May 2006, have raised concerns about identity theft. In addition, increasingly sophisticated analytical techniques employed by federal agencies, such as data mining, also raise concerns about how personally identifiable information is used and what controls are placed on its use. Concerns such as these have focused attention on the structures agencies have instituted to ensure privacy protections are in place.

The major requirements for privacy protection by federal agencies come from two laws, the Privacy Act of 1974 and the E-Gov Act of 2002. The Privacy Act places limitations on agencies' collection, disclosure, and use of personal information maintained in systems of records. The act describes a "record" as any item, collection, or grouping of information about an individual that is maintained by an agency and contains his or her name or another personal identifier. It also defines "system of records" as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires that when agencies maintain a system of records, they must notify the public by a system-of-records notice: that is, a notice in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended "routine" use of the data, and procedures that individuals can use to review and correct personal information. The act also requires agencies to define and limit their use of covered personal information. In addition, the act requires that to the greatest extent practicable, personal information should be collected directly from the subject individual when it may affect an individual's rights or benefits under a federal program.

The E-Gov Act of 2002 also assigns agencies significant responsibilities relating to privacy. The E-Gov Act strives to enhance protection for personal information in government information systems or information collections by requiring that agencies conduct privacy impact assessments (PIAs). A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. According to OMB guidance, a PIA is an analysis of how information is handled; specifically, a PIA is to (1) ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. Agencies must conduct PIAs (1) before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form or (2) before initiating any new data collections involving personal information that will be collected, maintained, or disseminated using information technology if the same questions are asked of 10 or more people.
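Read together, the two conditions above amount to a simple decision rule. The following Python sketch is only an illustration of that rule as stated here; the parameter names are our own shorthand, not terms defined in the E-Gov Act or OMB guidance.

```python
def pia_required(new_it_with_pii: bool,
                 new_collection_with_pii: bool,
                 people_asked_same_questions: int = 0) -> bool:
    """Illustrative restatement of the two PIA triggers described above."""
    # Trigger 1: developing or procuring IT that handles personally identifiable information.
    trigger_new_system = new_it_with_pii
    # Trigger 2: a new electronic collection asking the same questions of 10 or more people.
    trigger_new_collection = new_collection_with_pii and people_asked_same_questions >= 10
    return trigger_new_system or trigger_new_collection

# Example: a new electronic survey collecting personal information from 25 people.
print(pia_required(new_it_with_pii=False,
                   new_collection_with_pii=True,
                   people_asked_same_questions=25))  # True
```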
To the extent that PIAs are made publicly available, they provide explanations to the public about such things as the information that will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected. OMB is tasked with providing guidance to agencies on how to implement the provisions of these two acts and has done so, beginning with guidance on the Privacy Act, issued in 1975. The guidance provides explanations for the various provisions of the law as well as detailed instructions on how to comply. OMB's guidance on implementing the privacy provisions of the E-Gov Act of 2002 identifies circumstances under which agencies must conduct PIAs and explains how to conduct them.

We have previously reported on the role of senior privacy officials in the federal government. In 2006, we testified that the elevation of privacy officers to senior positions reflected the growing demands that these individuals faced in addressing privacy challenges on a day-to-day basis. The challenges we identified included ensuring compliance with relevant privacy laws, such as the Privacy Act and the E-Gov Act, and controlling the collection and use of personal information obtained from commercial sources. Additionally, in 2007 we reported that the DHS Privacy Office had made significant progress in carrying out its statutory responsibilities under the Homeland Security Act and its related role in ensuring E-Gov Act compliance, but noted that more work remained to be accomplished. We recommended that DHS designate privacy officers at key DHS components, implement a departmentwide process for reviewing Privacy Act notices, establish a schedule for the timely issuance of privacy reports, and ensure that the Privacy Office's annual reports to Congress contain a specific discussion of complaints of privacy violations. In response, DHS included a discussion of privacy complaints in its most recent annual report; however, the other recommendations have not yet been implemented.

Laws and guidance set a variety of requirements for senior privacy officials at federal agencies. For example, agencies have had a longstanding requirement under the Paperwork Reduction Act to assign agency chief information officers (CIOs) overall responsibility for privacy policy and compliance with the Privacy Act. In recent years, additional laws have been enacted that also address the roles and responsibilities of senior officials with regard to privacy. Despite much variation, all of these laws require agencies to assign overall responsibility for privacy protection and compliance to a senior agency official. In addition, OMB guidance has directed agencies to designate senior officials with overall responsibility for privacy. These laws and guidance set specific privacy responsibilities for these agency officials. These responsibilities can be grouped into six broad categories: (1) conducting PIAs; (2) Privacy Act compliance; (3) reviewing and evaluating the privacy implications of agency policies, regulations, and initiatives; (4) producing reports on the status of privacy protections; (5) ensuring that redress procedures are in place; and (6) ensuring that employees and contractors receive appropriate training. The laws and guidance vary in how they frame requirements in these categories and which agencies must adhere to them. Numerous laws assign privacy responsibility to senior agency officials.
The earliest of these laws is the Paperwork Reduction Act of 1980, which, as amended, directs agency heads to assign a CIO with responsibility for carrying out the agency's information resources management activities to improve agency productivity, efficiency, and effectiveness. The act directs agency CIOs to undertake responsibility for implementing and enforcing applicable privacy policies, procedures, standards, and guidelines, and to assume responsibility and accountability for compliance with and coordinated management of the Privacy Act of 1974 and related information management laws. As concerns about privacy have increased in recent years, Congress has enacted additional laws that include provisions addressing the roles and responsibilities of senior officials with regard to privacy. Despite variations, a common thread among these laws, as well as relevant OMB guidance, is that they all require agencies to assign overall responsibility for privacy protection and compliance to a senior agency official. Relevant laws include the following:

The Homeland Security Act of 2002 directed the secretary of DHS to designate a senior official with primary responsibility for privacy policy.

The Intelligence Reform and Terrorism Prevention Act of 2004 required the Director of National Intelligence to appoint a Civil Liberties Protection Officer and assigned this individual specific privacy responsibilities.

The Violence Against Women and Department of Justice Reauthorization Act of 2005 instructed the Attorney General to designate a senior official with primary responsibility for privacy policy.

The Transportation, Treasury, Independent Agencies and General Government Appropriations Act of 2005 directed each agency whose appropriations were provided by the act, including the Departments of Transportation and Treasury, to designate a chief privacy officer (CPO) with primary responsibility for privacy and data protection policy.

The Implementing Recommendations of the 9/11 Commission Act of 2007 instructed the heads of Defense, DHS, Justice, Treasury, Health and Human Services, and State, as well as the Office of the Director of National Intelligence and the Central Intelligence Agency, to designate no less than one senior officer to serve as a privacy and civil liberties officer.

Specific privacy provisions of these laws are summarized in appendix II. A number of OMB memorandums have also addressed the roles and responsibilities of senior privacy officials. In 1999, OMB required agencies to designate a senior official to assume primary responsibility for privacy policy. OMB later reiterated this requirement in its guidance on compliance with the E-Gov Act, in which it directed agency heads to designate an appropriate senior official with responsibility for the coordination and implementation of OMB Web and privacy policy and to serve as the agency's principal contact for privacy policies. Most recently, in 2005, OMB directed agencies to designate a senior agency official for privacy (SAOP) with agencywide responsibility for information privacy issues and with responsibility for specific privacy functions, including ensuring agency compliance with all federal privacy laws, playing a central policy-making role in the development of policy proposals that implicate privacy issues, and ensuring that contractors and employees are provided with adequate privacy training. Beginning in 2005, OMB has also issued guidance significantly enhancing longstanding requirements for agencies to report on their compliance with privacy laws.
OMB’s 2005 guidance directed agencies to add a new section addressing privacy to their annual reports under the Federal Information Security Management Act (FISMA). SAOPs were assigned responsibility for completion of this section, in which they were to report on such things as agency policies and procedures for the conduct of PIAs, agency policies for ensuring adequate privacy training, as well as their own involvement in agency regulatory and policy decisions. In 2006, OMB issued further guidance requiring agencies to include as part of their FISMA reports a section addressing measures for protecting personally identifiable information. This guidance also required that agencies provide OMB with quarterly privacy updates and report all incidents relating to the loss of or unauthorized access to personally identifiable information. Most recently, OMB directed agencies in 2007 to include in their FISMA reports additional items, such as their breach notification policies, plans to eliminate unnecessary use of Social Security numbers, and plans for reviewing and reducing their holdings of personally identifiable information. These laws and guidance set a variety of requirements for senior officials to carry out specific privacy responsibilities. These responsibilities can be grouped into the following six key functions: Conduct of PIAs: A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system, and is required before developing or procuring information technology that collects, maintains, or disseminates information that is in a personally identifiable form. Several laws assign privacy officials at covered agencies responsibilities that are met in part by performing PIAs on systems that collect, process, or store personally identifiable information. This includes the requirements for several agencies to ensure that “technologies sustain and do not erode privacy protections.” Furthermore, OMB guidance requires agency SAOPs to ensure compliance with federal laws, regulations, and policies relating to information privacy, such as the E-Gov Act, which spells out agency PIA requirements. Privacy Act compliance: As previously discussed, the Privacy Act sets a variety of requirements for all federal agencies regarding privacy protection. For example, the act requires that when agencies establish or make changes to a system of records, they must notify the public by a notice in the Federal Register , identifying, among other things, the type of data collected, the types of individuals about whom information is collected, the intended “routine” use of the data, and procedures that individuals can use to review and correct personal information. Several other laws explicitly direct agency privacy officials to ensure that the personal information contained in their Privacy Act systems of records is handled in compliance with fair information practices as set out in the act. Further, OMB guidance assigns agency SAOPs with responsibility for ensuring Privacy Act compliance. Policy consultation: Relevant laws direct senior privacy officials to actively participate in the development and evaluation of privacy-sensitive agency policy decisions. Several specifically task the SAOP with evaluating legislative and regulatory proposals or periodically reviewing agency actions affecting privacy. 
As agencies develop new policies, senior officials responsible for privacy issues play a key role in identifying and mitigating potential privacy risks prior to finalizing a particular policy decision. Moreover, OMB directed agency SAOPs to undertake a central role in the development of policy proposals that implicate privacy issues.

Privacy reporting: Agency senior privacy officials are often required to prepare periodic reports to ensure transparency about their activities and compliance with the law. Many of the laws we reviewed required agencies to produce periodic privacy reports to agency stakeholders and Congress. OMB also requires agency SAOPs to report on their privacy activities as part of their annual FISMA reports, including such measures as their total numbers of systems of records, the number of written privacy complaints they have received, and whether a senior official has responsibility for all privacy-related activities.

Redress: With regard to federal agencies, the term "redress" generally refers to an agency's complaint resolution process, whereby individuals may seek resolution of their concerns about an agency action. Specifically, in the privacy context, redress refers to processes for handling privacy inquiries and complaints as well as for allowing citizens who believe that agencies are storing and using incorrect information about them to gain access to and correct that information. The Privacy Act requires that all agencies, with certain exceptions, allow individuals access to their records and the ability to have inaccurate information corrected. Several recent laws also direct senior privacy officials at specific agencies to provide redress by ensuring that they have adequate procedures for investigating and addressing privacy complaints by individuals. Several laws also provide for attention to privacy in a broader context of civil liberties protection.

Privacy training: Privacy training is critical to ensuring that agency employees and contractor personnel follow appropriate procedures and take proper precautions when handling personally identifiable information. For example, the Transportation, Treasury, Independent Agencies and General Government Appropriations Act of 2005 requires senior privacy officials at covered agencies to ensure that employees have adequate privacy training. OMB also requires agency SAOPs to ensure that employees and contractors receive privacy training.

Beyond these key privacy functions, the laws also include requirements to ensure adequate security safeguards to protect against unauthorized access, use, disclosure, and destruction of sensitive personal information. Generally, this is provided through agency information security programs established under FISMA, and overseen by agency CIOs and chief information security officers (CISOs). Moreover, OMB has issued guidance instructing agency heads to establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records. Figure 1 shows the extent to which laws have requirements that specifically address each privacy function and to which agencies these requirements apply.

Agencies have varying organizational structures to address privacy responsibilities.
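One way to picture the kind of analysis summarized in figure 2 is to record which organizational unit performs each of the six key functions and then flag any function whose unit is not overseen by the SAOP. The Python sketch below is hypothetical; the unit names and assignments are illustrative only and do not describe any particular agency.

```python
KEY_FUNCTIONS = [
    "Conduct of PIAs", "Privacy Act compliance", "Policy consultation",
    "Privacy reporting", "Redress", "Privacy training",
]

# Hypothetical assignment of each function to an organizational unit at a
# notional agency; the unit names are illustrative only.
assignments = {
    "Conduct of PIAs": "Office of the CIO",
    "Privacy Act compliance": "Office of the General Counsel",
    "Policy consultation": "Office of the CIO",
    "Privacy reporting": "Office of the CIO",
    "Redress": "Component offices",
    "Privacy training": "Office of the CIO",
}

# Units that report to the designated senior agency official for privacy (SAOP).
saop_oversees = {"Office of the CIO"}

gaps = [f for f in KEY_FUNCTIONS if assignments[f] not in saop_oversees]
print("SAOP lacks oversight of: " + ", ".join(gaps) if gaps
      else "SAOP oversees all six key privacy functions")
```

An agency whose SAOP oversaw every listed unit would show no gaps; as discussed next, the agencies we reviewed varied on exactly this point.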
For example, of the 12 agencies we reviewed, 2 had statutorily designated CPOs who also served as SAOPs, 5 designated their agency CIOs as their senior officials, and the others designated a variety of other officials, such as the general counsel or assistant secretary for management. Further, not all of the agencies we reviewed had given their designated senior officials full oversight over all privacy-related functions. While 6 agencies had these officials overseeing all key privacy functions, 6 others relied on other organizational units not overseen by the designated senior official to perform certain key privacy functions. The fragmented way in which privacy functions have been assigned to organizational units in these agencies is at least partly the result of evolving requirements in law and guidance. As requirements have evolved, organizational responsibilities have been established incrementally to meet them. However, without oversight and involvement in all key privacy functions, SAOPs may be unable to effectively serve as agency central focal points for privacy.

Agencies have taken varied approaches to designating senior agency officials with privacy responsibilities. Of the 12 agencies we reviewed, 2 had separate CPOs that were also designated as the senior officials for privacy. Five agencies assigned their agency CIOs as SAOPs, and 1 agency assigned its CISO. Lastly, 4 agencies assigned another high-level official, such as a general counsel or assistant secretary for management, as the SAOP. In addition to varying in how they designated senior officials for privacy, agencies also varied in the way they assigned privacy responsibilities to organizational units. Four of the 12 agencies we reviewed (Transportation, DHS, State, and U.S. Agency for International Development) had one organization primarily responsible for all of the six key privacy functions outlined in the previous section. The remaining 8 agencies (Social Security Administration, Veterans Affairs, Defense, Commerce, Labor, Justice, Treasury, and Health and Human Services) relied on more than one organizational unit to perform privacy functions. Figure 2 summarizes the organizational structures in place at agencies to address the six key privacy functions, including the specific organizational units responsible for carrying out each of the key privacy functions.

Six of the agencies (DHS, State, Social Security Administration, Transportation, U.S. Agency for International Development, and Veterans Affairs) established privacy structures in which the SAOP oversaw all key privacy functions. For example, DHS's Privacy Office performed these functions under the direction of the CPO, who was also the department's SAOP. Similarly, U.S. Agency for International Development's CISO (also the SAOP) oversaw the agency's privacy office, which was responsible for all key functions. While more than one organizational unit carried out privacy functions in two cases (Veterans Affairs and the Social Security Administration), all such units were overseen by the senior agency official for privacy. However, six other agencies (Commerce, Defense, Health and Human Services, Justice, Labor, and Treasury) had privacy management structures in which the SAOP did not oversee all key privacy functions. For two agencies—Justice and Treasury—the SAOP had oversight over all key functions except for redress, which was handled by individual component organizations.
For the other four agencies, key functions were divided among two or more organizations, and the senior privacy official did not have oversight of all of them. For example, key privacy functions at Labor were being performed not only by the Office of the CIO (who is also the SAOP) but also by the Office of the Solicitor, which is independent of the CIO. Likewise, the senior official at Commerce was responsible for overseeing conduct of PIAs, policy consultation, and privacy training, while a separate Privacy Act Officer was responsible for Privacy Act compliance. Without full oversight of key privacy functions, SAOPs may be limited in their ability to ensure that privacy protections are administered consistently across the organization.

The fragmented way in which privacy functions have been assigned to organizational units in several agencies is at least partly the result of evolving requirements in law and guidance. As requirements have evolved, organizational responsibilities have been established incrementally to meet them. For example, although the Privacy Act does not specify organizational structures for carrying out its provisions, many agencies established Privacy Act officers to address the requirements of that act and have had such positions in place for many years. In some cases, agencies designated their general counsels to be in charge of ensuring that the Privacy Act's requirements were met. More recently, the responsibility to conduct PIAs under the E-Gov Act frequently has been given to another office, such as the Office of the CIO, because the E-Gov Act's requirements apply to information technology, which is generally the purview of the CIO. If an SAOP was designated in such agencies without reassigning these responsibilities, that official may not have oversight and involvement in all key privacy activities.

Uneven implementation of the Paperwork Reduction Act also may have contributed to fragmentation of privacy functions. As previously discussed, the Paperwork Reduction Act requires agency CIOs to take responsibility for privacy policy and compliance with the Privacy Act, and thus agencies could ensure they are in compliance with the Paperwork Reduction Act by designating their CIOs as SAOPs. However, 7 out of the 12 agencies we reviewed did not designate their CIOs as SAOPs. Further, if CIOs were designated as agency SAOPs but did not have responsibility for compliance with the Privacy Act—as was the case at Commerce, Labor, and Health and Human Services—the SAOPs would be left without full oversight of key privacy functions. Agencies that have more than one internal organization carrying out privacy functions run the risk that those organizations may not always provide the same protections for personal information if they are not overseen by a central authority. Thus, unless steps are taken to ensure that key privacy functions are under the oversight of the SAOP, agencies may be limited in their ability to ensure that information privacy protections are implemented consistently across their organizations.

While agencies have had the responsibility for many years to establish management structures to ensure coordinated implementation of privacy policy and compliance with the Privacy Act, recent laws and guidance have significantly changed requirements for privacy oversight and management.
These laws and guidance vary in scope and specificity, but they all require the designation of a senior agency official with overall responsibility for privacy protection and compliance with statutory requirements. In adopting varied assignments for key privacy functions, not all agencies gave their SAOPs responsibility for all key privacy functions. As a result, agencies may not be implementing privacy protections consistently. While the particulars of privacy management may vary according to the size of the agency and the sensitivity of its mission, agencies generally would likely benefit from having SAOPs that serve as central focal points for privacy matters and have oversight of all key functions, as required by law and guidance. Such focal points can help ensure that agency activities provide consistent privacy protections. In order to ensure that their SAOPs function effectively as central focal points for privacy management, we recommend that the Attorney General and the Secretaries of Commerce, Defense, Health and Human Services, Labor, and Treasury take steps to ensure that their SAOPs have oversight over all key privacy functions. We provided a draft of this report to OMB and to the departments and agencies we reviewed: the Departments of Commerce, Defense, Health and Human Services, Homeland Security, Justice, Labor, State, Treasury, Transportation, and Veterans Affairs, as well as the Social Security Administration and the U.S. Agency for International Development, for review and comment. Five agencies provided no comments on this draft. In comments provided via email, the Associate Deputy Assistant Secretary for Privacy and Records Management at Veterans Affairs and the Audit Management Liaison at the Social Security Administration concurred with our assessment and recommendations and provided technical comments, which we incorporated in the final report as appropriate. In oral comments, the Acting Branch Chief of the Information Policy and Technology Branch at OMB also concurred with our assessment and recommendations and provided technical comments, which we incorporated in the final report as appropriate. Commerce and Defense provided written comments that did not state whether they agreed or disagreed with our recommendations; however, both agencies stated that their privacy management structures were adequate. Their comments are reprinted in appendixes II and III respectively. Justice, Labor, and Treasury provided written comments and disagreed with our characterization of their agency SAOPs as not having oversight of all key privacy functions. Their comments are reprinted in appendixes IV, V, and VI respectively. The Chief Information Officer of the Department of Commerce stated that the department agreed with our characterization of the fragmentation that has resulted from recent laws and guidance that have significantly changed requirements for privacy oversight and management. However, she stated that applicable law does not require that the administration of the Privacy Act be consolidated with other privacy functions under the Office of the Chief Information Officer. Law and OMB guidance direct agencies to have a senior agency official, the CIO in the case of the Paperwork Reduction Act, serving as a focal point for privacy and ensuring compliance with the Privacy Act. 
Clearly establishing a senior official as a focal point for departmental privacy functions aligns with direction provided by law and OMB and would help ensure that the agency provides consistent privacy protections. The Senior Agency Official for Privacy at the Department of Defense stated that, while privacy responsibilities are divided among the Defense Privacy Office, the CIO, and agency components, the current privacy management structure at Defense has proven to be successful over time. We did not assess the effectiveness of the privacy management structures we reviewed. However, establishing an agency official that serves as a central focal point for departmental privacy functions aligns with direction provided by law and OMB and would help ensure that the agency provides consistent privacy protections. The Acting Chief Privacy and Civil Liberties Officer at Justice disagreed with our assessment that the department’s SAOP did not have oversight of redress procedures. He stated that the Chief Privacy and Civil Liberties Officer has statutory authority under the Violence Against Women and Department of Justice Reauthorization Act to assume primary responsibility for privacy policy and to ensure appropriate notifications regarding the department’s privacy policies and privacy-related inquiry and complaint procedures. We agree that the Chief Privacy and Civil Liberties officer has the statutory authority and responsibility for the oversight of privacy functions at Justice, including redress. However, our analysis of agency policies and procedures showed that the Chief Privacy and Civil Liberties Officer did not have an established role in oversight of redress procedures. Clearly defining the role of the Chief Privacy and Civil Liberties Officer in the departmental redress procedures would help ensure that the SAOP has oversight of this key privacy function. In its comments, the department noted that the Office of Privacy and Civil Liberties was undertaking a review of its orders and guidance to clarify and, as appropriate, strengthen existing authorities to ensure that the department implements thoroughly the Chief Privacy and Civil Liberties Officer authorities. The Chief Information Officer at Labor disagreed with our assessment that the SAOP did not have full oversight of all key privacy functions. He stated that Privacy Act compliance, redress, and training were addressed jointly by his office and the Office of the Solicitor. However, our review of Labor’s policies and procedures relating to privacy management showed that a joint oversight management structure had not been established. Rather, we found that while the CIO was responsible for three key privacy functions, the Office of the Solicitor was responsible for the remaining three functions. Clearly defining the role of the SAOP in Privacy Act compliance, redress, and training would help ensure that the SAOP has oversight of all key privacy functions. The Assistant Secretary for Management at Treasury agreed that the SAOP should have overall responsibility for privacy protection and compliance with statutory requirements and that agencies generally would likely benefit from having SAOPs that serve as central focal points for privacy matters and have oversight of all key functions. The Assistant Secretary noted that as of March 2008, the department had implemented a new privacy management structure to emphasize the importance of protecting privacy at its highest levels. 
However, Treasury disagreed with a statement in our draft report that it had realigned its organization in order to ensure that the SAOP had oversight of privacy functions. We recognize that privacy functions, with the exception of redress, were under the oversight of the SAOP prior to the reorganization and accordingly have deleted this statement from the final report. Treasury also disagreed that its SAOP did not have full oversight of agency redress processes, stating that the department has longstanding regulations that provide departmentwide and bureau-specific policies and procedures relating to redress. While we agree that such redress policies are in place, they do not establish a role for the SAOP. Clearly defining the role of the SAOP in the departmental redress procedures would help ensure that the SAOP has oversight of this key privacy function. Lastly, Treasury stated it submits quarterly reports to Congress on privacy complaint and redress activities. We agree that reporting is an important privacy function; however, it is separate from redress and does not constitute oversight of Treasury redress activities. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General; the Secretaries of Commerce, Defense, Health and Human Services, Homeland Security, State, Treasury, Labor, Transportation, and Veterans Affairs; the Commissioner of the Social Security Administration; and the Administrator of the U.S. Agency for International Development as well as other interested congressional committees. Copies will be made available at no charge on our Web site, www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or send e-mail to koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) describe laws and guidance that set requirements for senior privacy officials within federal agencies, and (2) describe the organizational structures used by agencies to address privacy requirements and assess whether senior officials have oversight over key functions. We did not evaluate agency compliance with these laws and guidance. To address our first objective, we reviewed and analyzed relevant laws and guidance to determine privacy responsibilities for privacy officials at agencies. We reviewed relevant laws, including the Implementing Recommendations of the 9/11 Commission Act of 2007, the Homeland Security Act of 2002, and others (see app. II for a full listing), which designate senior privacy officials and assign them privacy responsibilities. We also analyzed the Paperwork Reduction Act, which has long-standing privacy requirements assigned to agency chief information officers (CIO), and the Office of Management and Budget (OMB) guidance relating to the designation of senior agency officials with privacy responsibilities, such as Memorandum M-05-08. We also analyzed the specific privacy responsibilities identified in these laws and guidance and categorized the key privacy functions they represented. 
To address our second objective, we identified 12 agencies (Departments of Commerce, Defense, Health and Human Services, Homeland Security, Justice, Labor, State, Treasury, Transportation, and Veterans Affairs; the Social Security Administration, and the U.S. Agency for International Development) that either have a statutorily designated privacy officer, have a central mission for which privacy protection is a critical component, or have implemented a unique organizational privacy structure. We analyzed policies and procedures at these agencies, and interviewed senior agency privacy officials to identify the privacy management structures used at each of these agencies and the roles and responsibilities of senior privacy officials. We also compared the varying management structures at these agencies to identify the differences and similarities across agencies in their implementation of these structures. Further, we analyzed agency management structures to determine whether senior privacy officials at each of these agencies had full oversight over all key functions. We conducted our work from September 2007 to May 2008, in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are recent laws and their major provisions regarding privacy protection responsibilities at federal agencies. Section 222 of the Homeland Security Act of 2002, as amended, instructed the secretary of DHS to appoint a senior official with primary responsibility for privacy policy, including the following: ensuring that technologies sustain, and do not erode, privacy protections; ensuring that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the act; evaluating legislative and regulatory proposals and conducting privacy impact assessments of proposed rules; coordinating functions with the Officer for Civil Rights and Civil Liberties; preparing an annual report to Congress (without prior comment or amendment by agency heads or OMB); and having authority to investigate and having access to privacy-related records, including through subpoena in certain circumstances. 
Section 1011 of this act required the Director of National Intelligence to appoint a Civil Liberties Protection Officer and gave this officer the following functions: ensuring that the protection of civil liberties and privacy is appropriately incorporated into the policies and procedures of the Office of the Director of National Intelligence and the elements of the intelligence community within the National Intelligence Program; overseeing compliance by the Office of the Director of National Intelligence with all laws, regulations, and guidelines relating to civil liberties and privacy; reviewing complaints about abuses of civil liberties and privacy in Office of the Director of National Intelligence programs and operations; ensuring that technologies sustain, and do not erode, privacy protections; ensuring that personal information contained in a system of records subject to the Privacy Act is handled in full compliance with fair information practices as set out in that act; conducting privacy impact assessments when appropriate or as required; and performing such other duties as may be prescribed by the Director of National Intelligence or specified by law. Section 1174 of the Violence Against Women and Department of Justice Reauthorization Act of 2005 instructed the Attorney General to designate a senior official to assume primary responsibility for privacy policy, which included responsibility for advising the Attorney General in the following areas: appropriate privacy protections for the department’s existing or proposed information technology and systems; privacy implications of legislative and regulatory proposals; implementation of policies and procedures, including training and auditing, to ensure compliance with privacy-related laws and policies; that adequate resources and staff are devoted to meeting the department’s privacy-related functions and obligations; appropriate notifications regarding privacy policies and inquiry and complaint procedures; and privacy-related reports from the department to Congress and the President, including an annual report to Congress on activities affecting privacy. Section 522 of this act directed each agency with appropriations provided by the act to designate a chief privacy officer with primary responsibility for privacy and data protection policy, including ensuring that technology sustains, and does not erode, privacy and that technology used to collect or process personal information allows for continuous auditing of compliance with stated privacy policies and practices; ensuring that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as defined in the Privacy Act; evaluating legislative and regulatory proposals and conducting privacy impact assessments of proposed rules; preparing an annual report to Congress on activities affecting privacy; ensuring the protection of personal information and information systems from unauthorized access, use, disclosure, or destruction; providing employees with privacy training; and ensuring compliance with privacy and data protection policies. This law amended the National Intelligence Reform Act of 2004 to require the heads of covered agencies to designate no less than one senior officer to serve as a privacy and civil liberties officer. 
This act applies to the Departments of Defense, Homeland Security, Justice, Treasury, Health and Human Services, and State, as well as the Office of the Director of National Intelligence and the Central Intelligence Agency. The act requires the senior privacy official to perform the following functions: assisting the agency head in considering privacy and civil liberties issues with regard to anti-terrorism efforts; investigating and reviewing agency actions to ensure adequate consideration of privacy and civil liberties; ensuring that the agency has adequate redress procedures; considering privacy and civil liberties when deciding whether to retain or enhance a governmental power; coordinating activities, when relevant, with the agency Inspector General; and preparing periodic reports, not less than quarterly, to the agency head, Congress, and the Privacy and Civil Liberties Oversight Board. Agencies covered under this act are also required to establish a direct reporting relationship between the senior privacy official and the agency head. Major contributors to this report were John de Ferrari, Assistant Director; Idris Adjerid; Shaun Byrnes; Matt Grote; David Plocher; Jamie Pressman; and Amos Tevelow.
Government agencies have a long-standing obligation under the Privacy Act of 1974 to protect the privacy of individuals about whom they collect personal information. A number of additional laws have been enacted in recent years directing agency heads to designate senior officials as focal points with overall responsibility for privacy. GAO was asked to (1) describe laws and guidance that set requirements for senior privacy officials within federal agencies, and (2) describe the organizational structures used by agencies to address privacy requirements and assess whether senior officials have oversight over key functions. To achieve these objectives, GAO analyzed the laws and related guidance and analyzed policies and procedures relating to key privacy functions at 12 agencies. Federal laws set varying roles and responsibilities for senior agency privacy officials. Despite much variation, all of these laws require covered agencies to assign overall responsibility for privacy protection and compliance to a senior agency official. In addition, Office of Management and Budget guidance directs agencies to designate a senior agency official for privacy with specific responsibilities. The specific privacy responsibilities defined in these laws and guidance can be grouped into six broad categories: (1) conducting privacy impact assessments (which are intended to ensure that privacy requirements are addressed when personal information is collected, stored, shared, and managed in a federal system), (2) complying with the Privacy Act, (3) reviewing and evaluating the privacy implications of agency policies, (4) producing reports on the status of privacy protections, (5) ensuring that redress procedures to handle privacy inquiries and complaints are in place, and (6) ensuring that employees and contractors receive appropriate training. The laws and guidance vary in how they frame requirements in these categories and which agencies must adhere to them. Agencies also have varying organizational structures to address privacy responsibilities. For example, of the 12 agencies we reviewed, 2 had statutorily designated chief privacy officers who also served as senior agency officials for privacy, 5 designated their agency chief information officers as their senior privacy officials, and the others designated a variety of other officials, such as the general counsel or assistant secretary for management. Further, not all of the agencies we reviewed had given their designated senior officials full oversight over all privacy-related functions. While 6 agencies had these officials overseeing all key privacy functions, 6 others relied on other organizational units not overseen by the designated senior official to perform certain key privacy functions. The fragmented way in which privacy functions were assigned to organizational units in these agencies is at least partly the result of evolving requirements in law and guidance. However, without oversight of all key privacy functions, designated senior officials may be unable to effectively serve as agency central focal points for information privacy.
FECA is administered by Labor’s Office of Workers’ Compensation Programs (OWCP) and currently covers more than 2.7 million civilian federal employees from more than 70 different agencies. FECA benefits are paid to federal employees who are unable to work because of injuries sustained while performing their federal duties. Under FECA, workers’ compensation benefits are authorized for employees who suffer temporary or permanent disabilities resulting from work-related injuries or diseases. FECA benefits include payments for (1) loss of wages when employees cannot work because of work-related disabilities due to traumatic injuries or occupational diseases; (2) schedule awards for loss of, or loss of use of, a body part or function; (3) vocational rehabilitation; (4) death benefits for survivors; (5) burial allowances; and (6) medical care for injured workers. Wage-loss benefits for eligible workers with temporary or permanent total disabilities are generally equal to either 66-2/3 percent of salary for a worker with no spouse or dependent, or 75 percent of salary for a worker with a spouse or dependent. Wage-loss benefits can be reduced based on employees’ wage-earning capacities when they are capable of working again. OWCP provides wage-loss compensation until claimants can return to work in either their original positions or other suitable positions that meet medical work restrictions. Each year, most federal agencies reimburse OWCP from their annual appropriations for wage-loss compensation payments made to their employees. If claimants return to work but do not receive wages equal to those of their prior positions—such as claimants who return to work part-time—FECA benefits cover the difference between their current and previous salaries. Currently, there are no time or age limits placed on the receipt of FECA benefits. During passage of the Federal Employees’ Compensation Act of 1916, members of Congress raised concerns about the levels of benefits and the potential costs of establishing a program for injured federal employees. As Congress debated the act’s provisions in 1916 and again in 1923, some congressional members were concerned that a broad interpretation threatened to make the workers’ compensation program, in effect, a general pension. The 1916 act granted benefits to federal workers for work-related injuries. These benefits were not necessarily granted for a lifetime; they could be suspended or terminated under certain conditions. Nevertheless, the act placed no age or time limitations on injured workers’ receipt of wage compensation. The act did contain a provision allowing benefits to be reduced for older beneficiaries. The provision stated that compensation benefits could be adjusted when the wage-earning capacity of the disabled employee would probably have decreased on account of old age, irrespective of the injury. While the 1916 act did not specify the age at which compensation benefits could be reduced, the 1949 FECA amendments established 70 as the age at which a review could occur to determine if a reduction were warranted. In 1974, Congress eliminated the age provision entirely. Typically, federal workers participate in one of two retirement systems administered by the Office of Personnel Management (OPM): the Civil Service Retirement System (CSRS) or the Federal Employees’ Retirement System (FERS). Most civilian federal employees who were hired before 1984 are covered by CSRS. 
Under CSRS, employees generally do not pay Social Security taxes or earn Social Security benefits. Federal employees first hired in 1984 or later are covered by FERS. All federal employees who are enrolled in FERS pay Social Security taxes and earn Social Security benefits. Federal employees enrolled in either CSRS or FERS also may contribute to the Thrift Savings Plan (TSP); however, only employees enrolled in FERS are eligible for employer matching contributions to the TSP. Under both CSRS and FERS, the date of an employee’s eligibility to retire with an annuity depends on his or her age and years of service. The amount of the retirement annuity is determined by three factors: the number of years of service, the accrual rate at which benefits are earned for each year of service, and the salary base to which the accrual rate is applied. In both CSRS and FERS, the salary base is the average of the highest three consecutive years of basic pay. This is often called “high-3” pay. According to CRS, an injured employee cannot contribute to Social Security or to the TSP while receiving workers’ compensation because Social Security taxes and TSP contributions must be paid from earnings, and workers’ compensation payments are not classified as earnings under either the Social Security Act or the Internal Revenue Code. As a result, the employee’s future retirement income from Social Security and the TSP may be reduced. Legislation passed in 2003 increased the FERS basic annuity from 1 percent of the individual’s high-3 average pay to 2 percent of high-3 average pay while an individual receives workers’ compensation, which would help replace income that may have been lost from lower Social Security benefits and reduced income from TSP. Concerns that beneficiaries remain in the FECA program past retirement age have led to several proposals to change the program. Under current rules, an age-eligible employee with 30 years of service covered by FERS could accrue pension benefits that are 30 percent of their average high-3 pay and, under CSRS, could accrue almost 60 percent of their high-3 average pay. Under both systems, benefits can be taxed. FECA beneficiaries can receive up to 75 percent of their preinjury income, tax-free, if they have dependents and 66-2/3 percent without dependents. Because returning to work could mean giving up a FECA benefit for a reduced pension amount, concerns have been raised by some that the program may provide incentives for beneficiaries to continue on the program beyond retirement age. In 1996, we reported on two alternative proposals to change FECA benefits once beneficiaries reach the age at which retirement typically occurs: (1) converting FECA benefits to retirement benefits, and (2) changing FECA wage-loss benefits to a newly established FECA annuity. The first proposal would convert FECA benefits for workers who are injured or become ill to regular federal employee retirement benefits at retirement age. In 1981, the Reagan administration proposed comprehensive FECA reform, including a provision to convert FECA benefits to retirement benefits at age 65. The proposal included certain employee protections, one of which was calculating retirement benefits on the basis of the employee’s pay at time of injury (with adjustments for regular federal pay increases). 
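To make the comparison above concrete, the following is a minimal illustrative sketch, not an official benefit calculator: the function names, the $60,000 salary figure, and the use of the same salary base for both programs are assumptions, while the compensation rates and the approximate 30-year accrual fractions are the figures cited in this statement.

```python
def feca_wage_loss(salary_at_injury, has_dependent):
    """Annual FECA wage-loss compensation, paid tax-free:
    75 percent of salary with a spouse or dependent, 66-2/3 percent without."""
    rate = 0.75 if has_dependent else 2.0 / 3.0
    return rate * salary_at_injury


def retirement_annuity(high_3_average, accrued_fraction):
    """Annual retirement annuity (taxable): the accrued fraction of high-3 pay,
    roughly 0.30 under FERS or almost 0.60 under CSRS after 30 years of service."""
    return accrued_fraction * high_3_average


SALARY = 60_000  # assumed salary at injury and high-3 average, for illustration only
print(round(feca_wage_loss(SALARY, has_dependent=True)))    # 45000, tax-free
print(round(feca_wage_loss(SALARY, has_dependent=False)))   # 40000, tax-free
print(round(retirement_annuity(SALARY, 0.30)))              # 18000, taxable (FERS)
print(round(retirement_annuity(SALARY, 0.60)))              # 36000, taxable (CSRS)
```

Under these assumptions, the tax-free FECA payment exceeds even the CSRS annuity, which illustrates the incentive concern that the conversion and annuity proposals described in this statement seek to address.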
According to proponents, this change would improve agencies’ operations because their discretionary budgets would no longer be decreased by FECA costs, and, by reducing caseload, it would allow Labor to better manage new and existing cases for younger injured workers. A bill recently introduced in Congress includes a similar provision, requiring FECA recipients to retire upon reaching retirement age as defined by the Social Security Act. The second proposal, based on proposals that several agencies developed in the early 1990s, would convert FECA wage-loss compensation benefits to a FECA annuity benefit. These agency proposals would have reduced FECA benefits by a set percentage two years after beneficiaries reached civil service retirement eligibility. Proponents of this alternative noted that changing to a FECA annuity would be simpler than converting FECA beneficiaries to the retirement system, would result in consistent benefits, and would allow benefits to remain tax-free. Proponents also argued that a FECA annuity would keep the changed benefit within the FECA program, thereby avoiding complexities associated with converting FECA benefits under CSRS and FERS. For example, converting to retirement benefits could be difficult for some employees who currently are not participating in a federal retirement plan. Also, funding future retirement benefits could be a problem if the FECA recipient has not been making retirement contributions. Labor recently suggested a change to the FECA program that would reduce wage-loss benefits for Social Security retirement-aged recipients to 50 percent of their gross salary at the date of injury, but the benefit would remain tax-free. Labor’s proposal would still keep the changed benefit within the FECA program. In our 1996 report, however, we identified a number of issues with both alternative proposals. For example, some experts and other stakeholders we interviewed noted that age discrimination posed a possible legal challenge and that some provisions in the law would need to be addressed with new statutory language. Others noted that benefit reductions would cause economic hardships for older beneficiaries. Some noted that without the protections of the workers’ compensation program, injured employees who have few years of service or are ineligible for retirement might suffer large reductions in benefits. Moreover, opponents of change also viewed reduced benefits as breaking the workers’ compensation promise. Another concern was that agencies’ anticipation of reduced costs for workers’ compensation could result in fewer incentives to manage claims or to develop safer working environments. We also discussed in our 1996 report a number of issues that merit consideration in crafting legislation to change benefits for older beneficiaries. Going forward, Congress may wish to consider the following questions as it assesses current reform proposals: (1) How would benefits be computed? (2) Which beneficiaries would be affected? (3) What criteria, such as age or retirement eligibility, would initiate changed benefits? (4) How would other benefits, such as FECA medical and survivor benefits, be treated and administered? (5) How would benefits, particularly retirement benefits, be funded? The retirement conversion alternative raises complex issues, arising in part from the fact that conversion could result in varying retirement benefits, depending on conversion provisions, retirement systems, and individual circumstances. 
A key issue is whether benefits would be adjusted. The unadjusted option would allow for retirement benefits as provided by current law. The adjusted option would typically ensure that time on the FECA rolls was treated as if the beneficiary had continued to work. This adjustment could (1) credit time on FECA for years of service or (2) increase the salary base (for example, increasing salary from the time of injury by either an index of wage increases or inflation, assigning the current pay of the position, or providing for merit increases and possible promotions missed due to the injury). Determining the FECA annuity would require deciding what percentage of FECA benefits the annuity would represent. Under previous proposals, benefits would be two-thirds of the previous FECA compensation benefits. Provisions to adjust calculations for certain categories of beneficiaries also have been proposed. Under previous proposals, partially disabled individuals receiving reduced compensation would receive the lesser of the FECA annuity or the current reduced benefit. FECA annuity computations could also be devised to achieve certain benchmarks. For example, the formula for a FECA annuity could be designed to approximate a taxable retirement annuity. One issue concerning a FECA annuity is whether it would be permanent once set, or whether it would be subject to adjustments based on continuing OWCP reviews of the beneficiary’s workers’ compensation claim. Currently, most federal employees are covered by FERS, but conversion proposals might have to consider differences between FERS and CSRS participants, as well as participants in any specialized retirement systems. Other groups that might be uniquely affected include injured workers who are not eligible for federal retirement benefits; individuals who are eligible for retirement conversion benefits but not vested; and individuals who are partially disabled FECA recipients but remain active federal employees. With regard to vesting, those who have insufficient years of service to be vested might be given credit for time on the FECA rolls until vested. There is also the question of whether changes will focus on current or future beneficiaries. Exempting current beneficiaries would delay the full savings from FECA cost reductions. One option might be a transition period for current beneficiaries. For example, current beneficiaries could be given notice that their benefits would be changed after a certain number of years. Past proposals have used either age or retirement eligibility as the primary criterion for changing benefits. If retirement eligibility is used, consideration must be given to establishing eligibility for those who might otherwise not become retirement eligible. This would be true for either the retirement conversion or the annuity option. At least for purposes of initiating the changed benefit, time on the FECA rolls might be treated as if it counted as service time toward retirement eligibility. Deciding on the criteria that would initiate a change in benefits might require developing benchmarks. For example, if age were the criterion, it might be benchmarked against the average age of retirement for federal employees, or the average age of retirement for all employees. Another question is whether to use secondary criteria to delay changed benefits in certain cases. The amount of time one has received FECA benefits is one possible example of a secondary criterion. 
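As a rough sketch of how the alternatives and adjustments discussed above could be computed: the two-thirds factor and Labor's 50-percent figure come from the proposals described in this statement, while the 1-percent accrual rate, 2-percent salary index, and example amounts are assumptions chosen only for illustration.

```python
def feca_annuity(current_feca_benefit, fraction=2.0 / 3.0):
    """FECA-annuity alternative: a set fraction (two-thirds in earlier proposals)
    of the prior FECA compensation, kept tax-free within the FECA program."""
    return fraction * current_feca_benefit


def labor_proposal(salary_at_injury):
    """Labor's suggested change for retirement-aged recipients: 50 percent of
    gross salary at the date of injury, still tax-free."""
    return 0.5 * salary_at_injury


def adjusted_conversion(salary_at_injury, years_worked, years_on_feca,
                        accrual_rate=0.01, annual_index=0.02):
    """Adjusted retirement-conversion alternative: credit time on the FECA rolls
    as service and index the salary base forward from the date of injury."""
    indexed_salary = salary_at_injury * (1 + annual_index) ** years_on_feca
    return accrual_rate * (years_worked + years_on_feca) * indexed_salary


# Example: a beneficiary injured at a $60,000 salary after 20 years of service,
# on the FECA rolls for 10 years, and currently receiving $45,000 in compensation.
print(round(feca_annuity(45_000)))                  # 30000, tax-free
print(round(labor_proposal(60_000)))                # 30000, tax-free
print(round(adjusted_conversion(60_000, 20, 10)))   # about 21,942, taxable
```

How such formulas are parameterized, and whether partially disabled beneficiaries would instead keep the lesser of the annuity or their current reduced benefit, are among the computation questions any legislation would need to settle.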
Such secondary criteria might prove important in cases where an older, injured worker would face retirement under the retirement conversion option even when recovery and return to work are almost assured. In addition to changing FECA compensation benefits, consideration should be given to whether to change other FECA benefits, such as medical benefits or survivor benefits. For example, the 1981 Reagan administration proposal would have ended survivor benefits under FECA for those beneficiaries whose benefits were converted to the retirement system. Another issue to consider is who will administer benefits if program changes shift responsibilities—OPM administers retirement annuity benefits for federal employees, and Labor currently administers FECA benefits. Although it may be advantageous to consolidate case management in one agency, such as OPM, if the retirement conversion alternative were selected, the agency chosen to manage cases might have to develop expertise that it does not currently possess. For example, OPM might have to develop expertise in medical fee schedules to control workers’ compensation medical costs. For the retirement conversion alternative, another issue is the funding of any retirement benefit shortfall. Currently, agencies and individuals do not make retirement contributions if an individual receives FECA benefits; thus, if retirement benefits exceed those for which contributions have been made, retirement funding shortfalls would occur. Retirement fund shortfalls could be covered through payments made by agencies before, at, or after the time of conversion. First, a lump-sum payment could be made by agencies at the time of the conversion. This option has been criticized because the start-up cost was considered too high. Second, shortfalls could be covered on a pay-as-you-go basis after conversion. In this approach, agencies might make annual payments to cover the shortfall resulting from the conversions. Third, agencies’ and employees’ contributions to the retirement fund could continue before conversion, preventing shortfalls at conversion. Proposals for the FECA annuity alternative typically keep funding under the current FECA chargeback system. This is an annual pay-as-you-go system with agencies paying for the previous year’s FECA costs. Taken together, these five questions provide a framework for considering proposals to change the program. In conclusion, FECA continues to play a vital role in providing compensation to federal employees who are unable to work because of injuries sustained while performing their duties. However, continued concerns that the program provides incentives for beneficiaries to remain on the program at, and beyond, retirement age have led to calls for the program to be reformed. Although FECA’s basic structure has not been significantly amended for many years, there continues to be interest in reforming the program. Proposals to change benefits for older beneficiaries raise a number of important issues, with implications for both beneficiaries and federal agencies. These implications warrant careful attention to outcomes that could result from any changes. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the individual named above, key contributors to this testimony include Patrick Dibattista, H. Brandon Haller, Michelle Bracy, Tonnye Conner-White, James Rebbe, Kathleen van Gelder, and Melinda Bowman. Federal Workers’ Compensation: Better Data and Management Strategies Would Strengthen Efforts to Prevent and Address Improper Payments. GAO-08-284. Washington, D.C.: February 26, 2008. Postal Service Employee Workers’ Compensation Claims Not Always Processed Timely, but Problems Hamper Complete Measurement. GAO-03-158R. Washington, D.C.: December 20, 2002. Oversight of the Management of the Office of Workers’ Compensation Programs: Are the Complaints Justified. GAO-02-964R. Washington, D.C.: July 19, 2002. U.S. Postal Service: Workers’ Compensation Benefits for Postal Employees. GAO-02-729T. Washington, D.C.: May 9, 2002. Office of Workers’ Compensation Programs: Further Actions Are Needed to Improve Claims Review. GAO-02-725T. Washington, D.C.: May 9, 2002. Federal Employees’ Compensation Act: Percentages of Take-Home Pay Replaced by Compensation Benefits. GGD-98-174. Washington, D.C.: August 17, 1998. Federal Employees’ Compensation Act: Issues Associated With Changing Benefits for Older Beneficiaries. GGD-96-138BR. Washington, D.C.: August 14, 1996. Workers’ Compensation: Selected Comparisons of Federal and State Laws. GGD-96-76. Washington, D.C.: April 3, 1996. Federal Employees’ Compensation Act: Redefining Continuation of Pay Could Result in Additional Refunds to the Government. GGD-95-135. Washington, D.C.: June 8, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses issues related to possible changes to the Federal Employees' Compensation Act (FECA) program, a topic that we have reported on in the past. At the end of chargeback year 2010, the FECA program, administered by the Department of Labor (Labor), paid more than $1.88 billion in wage-loss compensation, impairment, and death benefits, and another $898.1 million for medical and rehabilitation services and supplies. Currently, FECA benefits are paid to federal employees who are unable to work because of injuries sustained while performing their federal duties, including those who are at or older than retirement age. Concerns have been raised that federal employees on FECA receive benefits that could be more generous than under the traditional federal retirement system and that the program may have unintended incentives for beneficiaries to remain on the FECA program beyond the traditional retirement age. Over the past 30 years, there have been various proposals to change the FECA program to address this concern. Recent policy proposals to change the way FECA is administered for older beneficiaries share characteristics with past proposals we have discussed in prior work. In August 1996, we reported on the issues associated with changing benefits for older beneficiaries. Because FECA's benefit structure has not been significantly amended in more than 35 years, the policy questions raised in our 1996 report are still relevant and important today. This statement will focus on (1) previous proposals for changing FECA benefits for older beneficiaries and (2) questions and associated issues that merit consideration in crafting legislation to change benefits for older beneficiaries. This statement is drawn primarily from our 1996 report in which we solicited views from selected federal agencies and employee groups to identify questions and associated issues with crafting benefit changes. In that report, we also reviewed relevant laws and analyzed previous studies and legislative proposals that would have changed benefits for older FECA beneficiaries. In summary, we have reported that the perception that many retirement-age beneficiaries were receiving more generous benefits on FECA had generated two alternative proposals to change benefits once beneficiaries reach the age at which retirement typically occurs: (1) converting FECA benefits to retirement benefits and (2) changing FECA wage-loss benefits by establishing a new FECA annuity. We also discussed a number of issues to be considered in crafting legislation to change benefits for older beneficiaries. Going forward, Congress may wish to consider the following questions in assessing current proposals for change: (1) How would benefits be computed? (2) Which beneficiaries would be affected? (3) What criteria, such as age or retirement eligibility, would initiate changed benefits? (4) How would other benefits, such as FECA medical and survivor benefits, be treated and administered? (5) How would benefits, particularly retirement benefits, be funded?
DOD is perhaps the largest and most complex organization in the world and spends billions of dollars each year to maintain key business operations intended to support the warfighter, including systems and processes related to the management of contracts, finances, the supply chain, support infrastructure, and weapons systems acquisition. We have reported for years that weaknesses in these business operations result in inefficiency, ineffective performance, inadequate accountability, and a lack of transparency. Despite various reform initiatives, DOD continues to face weaknesses in business operations that adversely affect not only the reliability of reported financial data but also the economy, efficiency, and effectiveness of its operations. To address long-standing management problems, we began our “high-risk” program in 1990 to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Historically, high-risk areas have been designated because of traditional vulnerabilities to fraud, waste, abuse, and mismanagement. As our high-risk program has evolved, we have increasingly used the high-risk designation to draw attention to areas associated with broad-based transformation needed to achieve greater economy, efficiency, effectiveness, accountability, and sustainability of selected key government programs and operations. For example, we first added DOD’s overall approach to business transformation to our high-risk list in 2005 because DOD had not taken the necessary steps to achieve and sustain business reform on a broad, strategic, departmentwide, and integrated basis. Furthermore, DOD continues to dominate the high-risk list. Specifically, DOD currently bears responsibility, in whole or in part, for 15 of our 27 high-risk areas. Of these 15, the 8 DOD-specific high-risk areas cut across all of DOD’s major business areas. Table 1 lists the 8 DOD-specific high-risk areas. Also, as shown in table 1, many of these management challenges have been on the high-risk list for a decade or more. In addition, DOD shares responsibility for 7 governmentwide high-risk areas. Collectively, these high-risk areas relate to most of DOD’s major business operations that directly support the warfighter, including how servicemembers get paid, the benefits provided to their families, and the availability and condition of the equipment they use both on and off the battlefield. Congress passed legislation that codified many of our prior recommendations related to DOD business systems modernization; this includes the establishment of various bodies and plans. Also, as required by Congress, DOD commissioned studies examining the feasibility and advisability of establishing a CMO to oversee the department’s business transformation process. As part of this effort, the Defense Business Board, an advisory panel, examined various options and, in May 2006, endorsed the CMO concept. In December 2006, the Institute for Defense Analyses also endorsed the need for a CMO position at DOD. In May 2007, DOD submitted a letter to Congress outlining its position regarding a CMO at DOD, stating that the Deputy Secretary of Defense should assume the CMO responsibilities. 
Although DOD has made progress in establishing a management framework upon which to develop overall business transformation efforts, this framework currently focuses on business systems modernization rather than broader business transformation efforts. Congress included provisions in the National Defense Authorization Acts for Fiscal Years 2005 and 2006 to assist DOD in addressing financial management and business systems modernization challenges—two of our high-risk areas— and DOD’s leadership has taken steps to comply with these provisions. For example, to improve financial management, DOD issued the initial Financial Improvement and Audit Readiness Plan in December 2005, which was last updated in June 2007, to guide financial improvement and financial audit efforts within the department. Also, to address its business systems modernization challenges, DOD has established the following: Defense Business Systems Management Committee: The Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 required DOD to set up a committee to review and approve major updates of the defense business enterprise architecture—or transformation blueprint— and the obligation of funds for defense systems modernization. Prior to the enactment of this legislation, we reported that DOD had not established a governance structure and the process controls needed to ensure ownership and accountability of business systems investments. Subsequently, Congress directed DOD to establish the Defense Business Systems Management Committee to oversee DOD business transformation. In February 2005, the Deputy Secretary of Defense chartered the Defense Business Systems Management Committee, which consists of senior defense military and civilian leaders. The Deputy Secretary of Defense serves as the chair of this committee and the Under Secretary of Defense for Acquisition, Technology, and Logistics serves as the vice chair of the committee. The committee is intended to establish strategic direction and plans for DOD’s business mission, oversee implementation of systemic performance in DOD’s business operations, approve business transformation plans and initiatives, ensure that funds are obligated for defense business systems modernization in accordance with the law, and recommend policies and procedures to the Secretary of Defense that enable efficient business operations throughout DOD. Investment review boards: The Ronald W. Reagan National Defense Authorization Act also required DOD to set up investment review boards to evaluate systems’ consistency with the business enterprise architecture and to provide oversight of the investment review process for business systems. Prior to the establishment of investment review boards, we had reported that billions of dollars were being spent on business systems investments with little oversight. DOD established the investment review boards in 2005 to serve as the oversight and investment decision-making bodies for business system investments in their respective areas of responsibility. These boards assess modernization investments over $1 million and determine how the investments will improve processes and support the warfighter. Business Transformation Agency: DOD established the Business Transformation Agency in October 2005 with the intent for it to support the Defense Business Systems Management Committee and coordinate business transformation by ensuring adoption of DOD-wide information and process standards as defined in the business enterprise architecture. 
The Business Transformation Agency reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics in his capacity as the vice chair of the Defense Business Systems Management Committee. The Business Transformation Agency’s charter includes responsibilities such as identifying urgent warfighter needs that can be addressed by business solutions, articulating the strategic vision for business transformation, exercising executive oversight for DOD-wide programs, and implementing plans and tools needed to achieve DOD business transformation. In addition, the department has developed various tools and plans to enable these entities to manage its business systems modernization efforts. The tools and plans the Defense Business Systems Management Committee approves, the Business Transformation Agency implements, and the investment review boards use to assess compliance include the following: Business enterprise architecture: DOD’s business enterprise architecture is a tool or a blueprint to guide and constrain investments in DOD organizations and systems as they relate to business operations. The business enterprise architecture provides the thin layer of corporate policies, capabilities, standards, and rules and focuses on providing tangible outcomes for a limited set of enterprise-level (DOD-wide) priorities, and the components are responsible under the department’s tiered accountability approach for defining their respective component-level architectures that are aligned with the corporate business enterprise architecture. According to DOD, subsequent releases of the business enterprise architecture will continue to reflect this federated approach and will define enforceable interfaces to ensure interoperability and information flow to support decision making at the appropriate levels. Enterprise transition plan: DOD guidance states that the enterprise transition plan is intended to lay out a road map for achieving DOD’s business transformation by implementing changes to technology, processes, and governance consistent with DOD’s business enterprise architecture. According to DOD, the enterprise transition plan is intended to summarize all levels of transition planning information (milestones, metrics, resource needs, and system migrations) as an integrated product for communicating and monitoring progress—resulting in a consistent framework for setting priorities and evaluating plans, programs, and investments. The enterprise transition plan contains time-phased milestones, performance metrics, and a statement of resource needs for new and existing systems that are part of the business enterprise architecture. Business Transformation Agency officials said that they see the enterprise transition plan as the highest level plan for DOD business transformation. DOD released its first enterprise transition plan in September 2005. DOD updates the enterprise transition plan twice a year, once in March as part of DOD’s annual report to Congress and once in September. While our prior work has acknowledged this progress, we also have reported on limitations. 
For example, while the latest version of the business enterprise architecture focuses on DOD-wide corporate policies, capabilities, rules, and standards, which are essential elements to meeting legislative requirements, this version has yet to be augmented by the DOD component organizations’ subsidiary architectures that are also essential to meeting these requirements and the department’s goal of having a federated family of architectures. While the latest version of the enterprise transition plan provides performance measures for the enterprise and component programs, including key milestones (such as initial operating capability), it does not include other important information needed to understand the sequencing of these business investments and does not address DOD’s complete portfolio of business system investments. While the department has established and begun implementing the investment review structures and processes that are consistent with legislation, it has yet to fully define the related portfolio-based information technology investment management practices. Furthermore, DOD’s efforts have been mainly focused on business systems modernization. During our review, we examined key documents, such as DOD’s enterprise transition plan, business transformation guidance, and minutes from the meetings of the Defense Business Systems Management Committee, and our analysis found that DOD has not yet expanded the focus beyond business systems. In addition, DOD officials stated that the Defense Business Systems Management Committee has mainly focused on providing oversight for business systems investments, rather than overall business transformation efforts, because this is what legislation has required it to do. Similarly, DOD officials stated that the enterprise transition plan also is focused on business systems and does not provide enough detail about overall business transformation. DOD officials added that the Business Transformation Agency is also limited to focusing mainly on business systems because its role is to support the Defense Business Systems Management Committee, which primarily provides oversight for business systems initiatives as specified in the Ronald W. Reagan National Defense Authorization Act. Additionally, DOD has not clearly defined or institutionalized interrelationships, roles and responsibilities, or accountability for establishing a management framework for overall business transformation. For example, the Deputy Secretary of Defense chairs an advisory board called the Deputy’s Advisory Working Group, which DOD officials have stated has a role in overall business transformation. The Deputy’s Advisory Working Group started in 2006 as an ad hoc committee, co-chaired by the Deputy Secretary of Defense and Vice Chairman of the Joint Staff, to manage the planning process for DOD’s strategic plan, the Quadrennial Defense Review. According to DOD officials, this working group is to provide departmentwide strategic direction on various issues that it chooses. Many of the same individuals who sit on the Defense Business Systems Management Committee also serve on the Deputy’s Advisory Working Group. However, opinions differ within DOD as to whether the committee or the working group will function as the primary body responsible for overall business transformation, and the relationship between these two entities has not been formalized. 
In addition, opinions differ between the two entities regarding the definition of DOD’s key business areas, with the Defense Business Systems Management Committee and the Business Transformation Agency using a broader definition of business processes than the Deputy’s Advisory Working Group and its supporting organizations. These differences hinder DOD’s ability to leverage the business systems modernization management framework to fully address broader business transformation efforts. Until the department institutionalizes a management framework that encompasses all aspects of business transformation, including establishing overall responsibility for and defining what is included in business transformation, DOD will be unable to integrate related initiatives into a sustainable, enterprisewide approach and to resolve weaknesses in business operations that we have shown are at high risk of waste, fraud, and abuse. DOD faces two critical challenges to achieving successful business transformation. First, DOD does not have a comprehensive, integrated, and enterprisewide plan or set of linked plans supported by a planning process that sets a strategic direction for overall business transformation efforts and monitors progress. Second, DOD lacks a full-time leadership position dedicated solely to the planning, integration, and execution of business transformation efforts. Until the department establishes a comprehensive, integrated planning process and establishes full-time sustained leadership, DOD will be challenged to integrate related initiatives into a sustainable, enterprisewide approach and to resolve weaknesses in business operations that we have shown are at high risk of waste, fraud, and abuse. DOD continues to be challenged in its business transformation efforts because it has not developed a comprehensive, integrated, and enterprisewide action plan or set of linked plans for business transformation that is supported by a comprehensive planning process. Such a plan or set of plans would help set strategic direction for overall business transformation efforts, prioritize initiatives and resources, and monitor progress through the establishment of performance goals, objectives, and rewards. Our prior work has shown that this type of plan should cover all of DOD’s key business functions; contain results-oriented goals, measures, and expectations that link institutional, unit, and individual performance goals and expectations to promote accountability; and establish an effective process and related tools for implementation and oversight. Furthermore, such an integrated business transformation plan would be instrumental in establishing investment priorities and guiding the department’s key resource decisions. Our analysis shows that DOD does not have an integrated plan in place and has not fully developed a comprehensive planning process. For example, we analyzed the enterprise transition plan and determined that the goals and objectives in the enterprise transition plan were not clearly linked to the goals and objectives in the Quadrennial Defense Review, DOD’s highest level strategic plan. In addition, the enterprise transition plan is not based on a strategic planning process. For example, it does not provide a complete assessment of DOD’s progress in overall business transformation efforts aside from business systems modernization. 
Furthermore, while the enterprise transition plan contains goals and milestones related to business systems, the plan does not contain results-oriented goals and measures that assess overall business transformation. Finally, we determined that DOD’s business transformation efforts are currently guided by multiple plans that are developed and maintained by various offices within DOD. DOD officials acknowledged our analysis that DOD does not have an integrated plan in place. Business Transformation Agency officials see the enterprise transition plan as the highest level plan for business transformation but acknowledge that it does not currently provide an assessment of the department’s overall approach to business transformation. Business Transformation Agency officials also acknowledged that they are challenged to work across various offices to develop an integrated planning process and results-oriented measures to assess overall business transformation. These officials added that DOD is starting to develop a family of linked plans to guide and monitor business transformation. Specifically, DOD’s March 2007 update to the enterprise transition plan includes an approach that is intended to align other business plans with the enterprise transition plan, establish working relationships among plan owners across DOD’s major business areas, and identify interdependencies among their products. However, according to Business Transformation Agency officials and others within DOD, the alignment currently involves only ensuring data consistency across DOD’s major business plans and does not yet encompass the full integration they envision. In addition, it is not clear from discussions with these officials which committee or office within DOD will be responsible for developing a family of linked plans and a supporting comprehensive planning process. The Defense Science Board, the Defense Business Board, and the Institute for Defense Analyses agree with our analysis. These organizations have issued reports supporting DOD’s need for an integrated planning process for business transformation. In a February 2006 report on military transformation, the Defense Science Board concluded that DOD needed, but did not have, a multiyear business plan capable of relating resources to mission purposes. In addition, the report said that confusion existed over roles in identifying needs, proposing and choosing solutions, executing programs, and overseeing performance. The Defense Science Board concluded that an effective business plan would give decision makers a clear understanding of the impact of resource decisions. The Defense Business Board arrived at a similar conclusion. In a May 2006 report on governance at DOD, the Defense Business Board reported that a challenge facing DOD’s business activities was the move from a hierarchical, functional approach to an enterprisewide, cross-functional, horizontal approach. The Defense Business Board recommended that DOD develop a strategic plan that contains clear goals and supporting objectives, including outcome-based metrics. In a December 2006 report about the need for a CMO at DOD, the Institute for Defense Analyses recommended that DOD adopt a planning structure that would ensure that the strategic-level directions and priorities drive day-to-day planning and execution. 
The Institute for Defense Analyses said that the planning structure should contain top-level goals, approaches, and resources and link these goals to the required resources within the executing activities. DOD continues to lack sustained leadership focused solely on business transformation. We have reported that as DOD and other agencies embark on large-scale organizational change initiatives, similar to defense business transformation, there is a compelling need to, among other things, (1) elevate attention on management issues and transformational change efforts, (2) integrate various key management and transformation efforts into a coherent and enterprisewide approach, and (3) institutionalize accountability for addressing transformation needs and leading change. Without such leadership, DOD is at risk of not being able to sustain and ensure the success of its overall business transformation efforts, and its efforts risk becoming another in a long line of unsuccessful management reform initiatives. The Deputy Secretary of Defense has elevated the attention paid to business transformation efforts, and he and other senior leaders have clearly shown a commitment to business transformation and to addressing deficiencies in the department’s business operations. For example, the Deputy Secretary has been actively engaged in monthly meetings of both the Defense Business Systems Management Committee and the Deputy’s Advisory Working Group, and directed the creation of the Business Transformation Agency to support the Defense Business Systems Management Committee. However, these organizations do not provide the sustained leadership needed to successfully achieve overall business transformation. The Defense Business Systems Management Committee’s representatives consist of political appointees whose terms expire when administrations change, and the roles of the Deputy’s Advisory Working Group have not been institutionalized in DOD directives or charters. Without such institutionalization, these bodies’ existence and roles could change within or between administrations. A broad-based consensus exists among GAO and others that the status quo is unacceptable and that DOD needs a CMO to provide leadership over business transformation efforts, although there are different views concerning the characteristics of a CMO, such as whether the position should be codified in statute, established as a separate position from the Deputy Secretary of Defense, designated as Executive Level II or Level III, subject to a term appointment, or supported by a deputy CMO. As required by Congress, DOD commissioned studies of the feasibility and advisability of establishing a deputy secretary of defense for management to oversee the department’s business transformation process. As part of this effort, the Defense Business Board, an advisory panel, examined various options and, in May 2006, endorsed the CMO concept. Furthermore, in December 2006, the Institute for Defense Analyses issued a study that reported on various options for the creation of a CMO position and concluded that a CMO is needed at DOD. In response to the Institute for Defense Analyses report, DOD submitted a letter to Congress in May 2007 outlining the department’s position on a CMO at DOD. However, this position does not adequately address the key leadership challenge that we discuss in this report—that is, the lack of a senior leader, at the right level, with appropriate authority, to focus full time on overall business transformation. 
In summary, DOD is proposing to Congress that the role of a CMO be assigned to the Deputy Secretary of Defense. While the Deputy Secretary may be at the right level, with the appropriate authority and responsibility to transform business operations, we have testified that the demands placed on him and other senior leaders make it difficult for them to maintain the oversight, focus, and momentum needed to resolve business operational weaknesses, including the high-risk areas. Finally, DOD does not agree with codifying the CMO role in legislation, stating that doing so would restrict the flexibility of future Presidents and Secretaries of Defense to build an integrated management team. DOD would rather leave the assignment of the CMO role to the discretion of the Secretary of Defense, and DOD plans to formalize the Deputy Secretary's CMO and business transformation duties in a DOD directive. Because of the complexity and long-term nature of business transformation, we have long advocated the establishment of a CMO position at DOD with significant authority and experience and a term that would provide sustained leadership and the time to integrate the department's overall business transformation efforts. Major transformation initiatives often take at least 5 to 7 years in large private and public sector organizations. Codifying a separate, full-time CMO position in statute would ensure continuity and help to create unambiguous expectations and underscore congressional desire to follow a professional, nonpartisan, sustainable, and institutional approach to this position. Without formally designating responsibility and accountability for results, reconciling competing priorities among various organizations and prioritizing investments will be difficult and could impede the department's progress in addressing deficiencies in key business areas. A full-time and separate CMO position could devote the necessary time and effort to further and sustain DOD's progress and would be accountable for planning, integrating, and executing the department's overall business transformation efforts. Further, we believe that the CMO should be at Executive Level II and report directly to the Secretary of Defense so that the position has the stature needed to successfully address integration challenges, address DOD's high-risk areas with a strategic and systematic approach, and prioritize investments across the department. If the CMO duties were subsumed within the Deputy Secretary of Defense position, as DOD advocates, the CMO would be at Executive Level II but would not be subject to a term or able to focus full-time attention on business transformation. Finally, we advocate an extended term appointment for the CMO of at least 5 to 7 years so that the position could span administrations to sustain business transformation when key personnel changes occur. DOD's business transformation efforts involve various entities whose interrelationships are not clearly articulated and numerous plans that are not integrated across the department. Currently, there is no single individual, office, or integrated plan within DOD to provide a complete and focused assessment of the department's business transformation efforts. DOD continues to face formidable challenges, both externally with its ongoing military operations and internally with the long-standing problems of fraud, waste, and abuse. Pervasive, decades-old management problems related to its business operations affect all of DOD's major business areas. 
While DOD has taken positive steps to address these problems, our previous work has shown a persistent pattern of limited scope of focus and a lack of integrated planning and sustained leadership. In this time of growing fiscal constraints, every dollar that DOD can save through improved economy and efficiency of its operations is important to the well-being of our nation and the legitimate needs of the warfighter. DOD can no longer afford to address business transformation as it has in the past. Unless DOD elevates and integrates its efforts, billions of dollars will continue to be wasted every year. Furthermore, without strong and sustained leadership, both within and across administrations, DOD will likely continue to have difficulties in maintaining the oversight, focus, and momentum needed to implement and sustain the needed reforms to its business operations. In this regard, we continue to believe that a CMO whose sole focus is to integrate and oversee the overall transformation of the department's business operations remains key to DOD's success. To ensure successful and sustained business transformation at DOD, we recommend that the Secretary of Defense take the following two actions: (1) institutionalize in directives the roles, responsibilities, and relationships among various business-related entities and committees, such as the Defense Business Systems Management Committee, investment review boards, the Business Transformation Agency, and the Deputy's Advisory Working Group, and expand the management framework to capture overall business transformation efforts, rather than limit efforts to modernizing business systems; and (2) develop a comprehensive strategic planning process for business transformation that results in a comprehensive, integrated, and enterprisewide plan or set of interconnected functional plans that covers all key business areas and provides a clear strategic direction, prioritizes initiatives, and monitors progress across the department. Given DOD's view that the Deputy Secretary of Defense should be assigned CMO duties, Congress should consider enacting legislation to establish a separate, full-time position at DOD with the significant authority and experience and a sufficient term to provide focused and sustained leadership and momentum over business transformation efforts. In written comments on a draft of this report, DOD generally concurred with our recommendations that the department institutionalize a management framework and develop a comprehensive strategic planning process for business transformation, and disagreed with our matter for congressional consideration that Congress enact legislation to establish a separate, full-time CMO position. The department's comments are reprinted in appendix II. In its overall comments, DOD expressed concern about what it characterized as GAO's belief that the department placed improper emphasis on business systems modernization to the detriment of overall business transformation efforts. In particular, DOD stated that business systems modernization is a critical step in achieving overall business transformation, and that lessons learned and governance structures developed for modernizing business systems acquisition processes are being evaluated for implementation beyond the business side. It further stated that the Deputy's Advisory Working Group and the Defense Business Systems Management Committee both focus more broadly on defense business transformation. 
DOD also believed we had overstated the nature of "broad-based consensus" among GAO, the Institute for Defense Analyses, and the Defense Business Board about the need for a CMO in DOD, noting that the Institute for Defense Analyses had examined four alternative methods for institutionalizing the roles of the CMO and ultimately supported the department's position that those duties be vested in the Deputy Secretary of Defense. We disagree with DOD's characterization of our report with respect to the emphasis of the department's efforts and the nature of the broad-based consensus on the need for a CMO. The report specifically gives DOD credit for progress to date on setting up an overall framework for broader business transformation, and in no way suggests that any specific steps taken regarding modernizing business systems are detrimental to this progress. Rather, we note that the framework, as currently structured and implemented, focuses on business systems, is a foundation to build upon, and needs to be expanded to more fully address broader transformation issues. The report also recognizes the establishment of the Deputy's Advisory Working Group and the Defense Business Systems Management Committee. While DOD suggests that these two groups focus more broadly on business transformation, our work shows that DOD has not clearly defined or institutionalized interrelationships, roles and responsibilities, or accountability for broader business transformation among these entities. Also, differences of opinion exist within DOD about the roles and scope of the various entities. Further, contrary to DOD's view, we did not overstate the nature of the "broad-based consensus" regarding the need for a CMO. In fact, the Defense Business Board, the Institute for Defense Analyses, and the department are on record in their support for establishing a CMO at DOD. Specifically, the board endorsed the CMO concept in a study completed in May 2006, the Institute for Defense Analyses identified the need for a CMO in its study completed in December 2006, and DOD, in a May 2007 letter, informed Congress of its view that the Deputy Secretary of Defense should assume CMO responsibilities. The Institute for Defense Analyses also recommended that Congress establish a new deputy CMO position with an Executive Level III term appointment of 7 years to provide full-time support to the Deputy Secretary in connection with business transformation issues. We believe these actions demonstrate a broad-based consensus regarding the need for a CMO and, therefore, that the status quo is unacceptable. Notwithstanding these positions, we also recognize, as stated in the report, that there are different views concerning the characteristics of a CMO, such as whether the position should be codified in statute, established as a separate position from the Deputy Secretary, designated as Executive Level II or Level III, or subject to a term appointment. As stated in this report and numerous testimonies, we believe the CMO position should be codified in statute as a separate and full-time position, designated as Executive Level II, and subject to an extended term appointment. In addition to its overall comments, DOD provided detailed comments on our two recommendations. 
Specifically, DOD concurred with our first recommendation that the department institutionalize in directives the roles, responsibilities, and relationships among various business-related entities and committees and expand the management framework beyond business systems modernization to capture overall business transformation efforts. In fact, DOD stated explicitly in its comments that the department is a strong advocate for institutionalizing, in its DOD Directives System, the functions, responsibilities, authorities, and relationships of its principal officials and the management processes they oversee. DOD added that the Deputy Secretary of Defense has issued a directive-type memorandum on the management of the Deputy’s Advisory Working Group and that a draft DOD directive has been prepared to define the functions of the Defense Business Systems Management Committee and elaborate its relationships with the Defense Business Transformation Agency and other key business-related entities in the department. We recognize that directives and memorandums, in some cases, do exist, and that DOD plans to finalize additional directives, particularly for the Defense Business Systems Management Committee. As noted in our report, during the course of our review, we found that DOD has not clearly defined or institutionalized interrelationships, roles and responsibilities, or accountability for establishing a management framework for overall business transformation, and that differences of opinion exist within the department regarding which of the various senior leadership committees will function as the primary body responsible for overall business transformation. Therefore, we encourage DOD to ensure that its efforts to institutionalize its management framework for business transformation in directives specifically address these matters, and once directives are finalized, to take steps to clearly communicate the framework and reinforce its implementation throughout the department. Further, DOD partially concurred with our second recommendation that the Secretary of Defense develop a comprehensive strategic planning process for business transformation that results in a comprehensive, integrated, and enterprisewide plan or set of plans. Specifically, DOD stated that it has already begun to expand the scope of the enterprise transition plan to become a more robust enterprisewide planning document and to evolve this plan into the centerpiece strategic document for transformation. DOD added that as the enterprise transition plan evolves, it will continue to improve in aligning strategy with outcomes, identifying business capability gaps, prioritizing future needs, and developing metrics to measure achievement. DOD also stated that it will continue to evolve its family of plans to address our recommendation. While DOD’s proposed actions to address both of our recommendations appear to be positive steps, the key to their success will be in the details of their implementation. Moreover, we continue to believe that these efforts alone will not be sufficient to bring about the desired transformation. More specifically, efforts to institutionalize and broaden the scope of a management framework and develop a comprehensive strategic planning process for business transformation will not be successful without a CMO to guide and sustain these efforts. 
However, DOD disagreed with our matter for congressional consideration that Congress consider enacting legislation to establish a separate, full-time CMO position at DOD to provide focused and sustained leadership and momentum over business transformation efforts, stating that no official below the Secretary of Defense, except the Deputy Secretary, has the rank and perspective to provide the strategic leadership and authoritative decision making necessary to ensure implementation of departmentwide business activities. DOD stated that the Deputy Secretary of Defense is to be designated as the CMO and that an internal directive is being revised to that effect. DOD also stated its belief that the continuity of business transformation is best ensured by institutionalized processes and organizations, the knowledge and perspective of DOD's career workforce, clear and mutually agreed to economy and efficiency goals, and the due diligence of future administrations and Members of Congress to nominate and confirm highly qualified executives to serve at DOD. Further, DOD stated that the establishment of an additional official at the under secretary level to lead business transformation would generate dysfunctional competition among the five other Under Secretaries by creating confusion and redundancy in their roles and responsibilities. DOD added that the Deputy Secretary of Defense as the CMO has sufficient officials available to assist in managing the department and the authority necessary to refine the department's management structure to continue business management reform and integrate business transformation activities with the operational work of the department. Because of the complexity and long-term nature of business transformation, we have consistently reported and testified that DOD needs a CMO with significant authority and experience, a term that would provide sustained leadership, and the time to integrate overall business transformation efforts. In our view, DOD's plan to subsume the CMO duties within the Deputy Secretary of Defense position and to establish this action by directive would place the responsibilities at the appropriate level—Executive Level II—but would result in a position not subject to a term or able to focus full-time attention on business transformation. Transformation is a long-term process, especially for large and complex organizations such as DOD. Therefore, a term of at least 5 to 7 years is recommended to provide sustained leadership and accountability. To ensure continuity, it should become a permanent position, with the specific duties authorized in statute. As stated in our report, we believe codifying a separate, full-time CMO position in statute would also help to create unambiguous expectations and underscore congressional desire to follow a professional, nonpartisan, sustainable, and institutional approach to this position. We recognize that the Deputy Secretary of Defense has officials and institutional structures available to support the transformation process; however, transformation cannot be achieved through a committee approach. Ultimately, a person at the right level, with the right type of experience, in a full-time position with a term appointment, and with the proper amount of responsibility, authority, and accountability is needed to lead the effort. 
Contrary to DOD’s view, we believe the establishment of a separate CMO position would bring leadership, accountability, focus, and direction to the department’s efforts rather than creating dysfunctional competition and causing confusion. The CMO would not assume the responsibilities of the Under Secretaries of Defense or any other officials. Rather, the CMO would be responsible and accountable for planning, integrating, and executing the department’s overall business transformation effort, and would be able to give full-time attention to business transformation. As such, the CMO would be a key ally to other officials in the department in dealing with the business transformation process. Without formally designating responsibility and accountability for results, reconciling competing priorities among various organizations and prioritizing investments will be difficult and could impede progress in addressing deficiencies in key business areas. We believe DOD’s position essentially represents the status quo, and that in the interest of the department and American taxpayers, the department needs a CMO to help transform its key business operations and avoid billions of dollars in waste each year. We are encouraged that this matter is now before Congress as it prepares to deliberate on pending legislation that calls for statutorily establishing a CMO for DOD. In particular, we believe any resulting legislation should include some important characteristics for the CMO position. Specifically, a CMO at DOD should be codified in statute as a separate and full-time position that is designated as an Executive Level II appointment and reports directly to the Secretary of Defense so that the individual in this position has the stature needed to successfully address integration challenges, adjudicate disputes, and monitor progress on overall business transformation across defense organizations. In addition, the position should be subject to an extended term appointment such that the CMO would span administrations to sustain transformation efforts when key personnel changes occur. Transformation is a long-term process, especially for large and complex organizations such as DOD. Therefore, a term of at least 5 to 7 years is recommended to provide sustained leadership and accountability. In addition, we would recommend a requirement for advance notification should the Secretary decide to remove an individual from the CMO position. We are sending copies of this report to interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff members who made key contributions to this report are listed in appendix III. To assess the progress the Department of Defense (DOD) has made in setting up a management framework for business transformation, we reviewed and analyzed relevant documents and current literature about the department’s business transformation and interviewed key DOD senior leaders and defense experts. 
Documents that we used for our review included, but were not limited to, (1) GAO reports related to DOD's high-risk areas, including business systems modernization, development of the business enterprise architecture, and organizational transformation; (2) DOD products, including the 2006 Quadrennial Defense Review and updates to DOD's enterprise transition plan; (3) DOD's annual reports on business transformation to Congress (and biannual updates); (4) DOD testimony to Congress on the status of business transformation; and (5) meeting minutes and briefing documents, such as those from the Defense Business Systems Management Committee, the Deputy's Advisory Working Group, and the Defense Business Board, related to DOD's business transformation, governance, and management reforms. We obtained testimonial evidence from officials representing the Business Transformation Agency, offices within the Office of the Secretary of Defense (including the Program Analysis and Evaluation Directorate; Office of the Director, Administration and Management; and the Office of Business Transformation), the Joint Staff, the military departments, and defense experts. To assess the challenges DOD faces in maintaining and ensuring success in its overall business transformation efforts, we compared DOD's efforts to key practices we found to be consistently at the center of successful organizational mergers and transformations, specifically, establishing a coherent mission and integrated strategic goals to guide the transformation and ensuring that top leadership drives the transformation. We also reviewed relevant plans and related documents to assess integration among DOD's various business-related plans. These plans included DOD's Quadrennial Defense Review, Performance and Accountability Report, Financial Improvement and Audit Readiness Plan, Defense Acquisition Transformation Report to Congress, Supply Chain Management Improvement Plan, Focused Logistics Joint Functional Concept and the Focused Logistics Campaign Plan, Human Capital Strategy, and the Defense Installations Strategic Plan. In addition, we reviewed proposals for a chief management officer (CMO) at the department and obtained testimonial evidence from key DOD officials and defense experts. As part of this effort, we considered comments raised by several public and private sector officials during a forum sponsored by the Comptroller General in April 2007. The purpose of this forum was to discuss the merits of a CMO or chief operating officer concept. We also analyzed congressionally mandated CMO reports prepared by the Defense Business Board and the Institute for Defense Analyses and reviewed DOD's response to the study prepared by the Institute for Defense Analyses. We conducted our work from September 2006 through July 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, David Moser, Assistant Director; Thomas Beall; Renee Brown; Donna Byers; Grace Coleman; Gina Flacco; Barbara Lancaster; Julia Matta; and Suzanne Perkins made key contributions to this report. DOD Business Systems Modernization: Progress Continues to Be Made in Establishing Corporate Management Controls, but Further Steps Are Needed. GAO-07-733. Washington, D.C.: May 14, 2007. Business Systems Modernization: DOD Needs to Fully Define Policies and Procedures for Institutionally Managing Investments. GAO-07-538. Washington, D.C.: May 11, 2007. DOD Transformation Challenges and Opportunities. GAO-07-500CG. 
Washington, D.C.: February 12, 2007. Business Systems Modernization: Strategy for Evolving DOD's Business Enterprise Architecture Offers a Conceptual Approach, but Execution Details Are Needed. GAO-07-451. Washington, D.C.: April 16, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Defense Business Transformation: A Comprehensive Plan, Integrated Efforts, and Sustained Leadership Are Needed to Assure Success. GAO-07-229T. Washington, D.C.: November 16, 2006. Department of Defense: Sustained Leadership Is Critical to Effective Financial and Business Management Transformation. GAO-06-1006T. Washington, D.C.: August 3, 2006. Business Systems Modernization: DOD Continues to Improve Institutional Approach, but Further Steps Needed. GAO-06-658. Washington, D.C.: May 15, 2006. GAO's High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Defense Management: Additional Actions Needed to Enhance DOD's Risk-Based Approach for Making Resource Decisions. GAO-06-13. Washington, D.C.: November 15, 2005. Defense Management: Foundational Steps Being Taken to Manage DOD Business Systems Modernization, but Much Remains to Be Accomplished to Effect True Business Transformation. GAO-06-234T. Washington, D.C.: November 9, 2005. 21st Century Challenges: Transforming Government to Meet Current and Emerging Challenges. GAO-05-830T. Washington, D.C.: July 13, 2005. DOD Business Transformation: Sustained Leadership Needed to Address Long-standing Financial and Business Management Problems. GAO-05-723T. Washington, D.C.: June 8, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005.
In 2005, GAO added the Department of Defense's (DOD) approach to business transformation to its high-risk list because (1) DOD's improvement efforts were fragmented, (2) DOD lacked an integrated and enterprisewide business transformation plan, and (3) DOD had not designated a senior official at the right level with the right authority to be responsible for overall business transformation efforts. This report assesses (1) the progress DOD has made in setting up a management framework for overall business transformation efforts and (2) the challenges DOD faces in maintaining and ensuring the success of those efforts. GAO conducted this work under the Comptroller General's authority to conduct evaluations under his own initiative. In conducting its work, GAO compared DOD's actions to key practices of successful transformations. Although DOD has made progress toward establishing a management framework for overall business transformation, the framework currently focuses on business systems modernization and does not fully address broader business transformation efforts. In 2005, DOD set up the Defense Business Systems Management Committee to review and approve the business enterprise architecture (a transformation blueprint) and new business systems modernization investments. It also established the Business Transformation Agency, which currently reports to the Vice Chair of the Defense Business Systems Management Committee, to coordinate and lead business transformation across the department. Despite these steps, DOD has not clearly defined or institutionalized interrelationships, roles and responsibilities, or accountability for establishing a management framework for overall business transformation. For example, differences of opinion exist within DOD about the roles of various senior leadership committees. Until DOD's business transformation management framework is institutionalized and encompasses broad responsibilities for all aspects of business transformation, it will be challenging for DOD to integrate related initiatives into a sustainable, enterprisewide approach to successfully resolve weaknesses in business operations that GAO has shown are at high risk of waste, fraud, and abuse. DOD also must overcome two critical challenges, among several others, if it is to maintain and ensure success. Specifically, DOD does not have (1) a comprehensive, integrated, and enterprisewide plan or set of linked plans, supported by a planning process that sets a strategic direction for overall business transformation efforts, prioritizes initiatives and resources, and monitors progress, and (2) a full-time leadership position at the right level dedicated solely to the planning, integration, and execution of overall business transformation efforts. A broad-based consensus exists among GAO and others, including the Institute for Defense Analyses and the Defense Business Board, that the status quo is unacceptable and that DOD needs a chief management officer (CMO) to provide leadership over business transformation efforts. In a May 2007 letter to Congress, however, DOD stated its view that a separate position is not needed because the Deputy Secretary of Defense can fulfill the CMO role. 
Although the Deputy Secretary may be at the right level with appropriate authority to transform business operations, the demands placed on this position make it difficult for the Deputy Secretary to focus solely on business transformation, and the position does not have the necessary term of appointment to sustain progress across administrations. Further, DOD plans to leave the assignment of the CMO role to the discretion of the Secretary of Defense. In GAO's view, codifying the CMO position in statute as a separate, full-time position at the right level with an extended term is necessary to provide sustained leadership, further DOD's progress, and address challenges the department continues to face in its business transformation efforts.
The National Defense Authorization Act for Fiscal Year 2013 required that DOD develop a detailed implementation plan for carrying out its health care system reform of creating the Defense Health Agency (DHA), and provide the plan to the congressional defense committees in three separate submissions in fiscal year 2013. In October 2013, DOD established the DHA to assume management responsibility for numerous functions of its medical health care system, support the services in carrying out their medical missions, manage the military's health plan, oversee the medical operations within the National Capital Region, and provide 10 shared services, including oversight of medical education and training. According to DOD, a “shared services concept” is a combination of common services performed across the medical community with the goal of achieving cost savings. The DHA's Education and Training Directorate, a shared service, is scheduled to begin operations in August 2014 and, according to DOD officials, when operational, will constitute the first instance of oversight of medical education and training at the Office of the Secretary of Defense level. While the services establish training requirements, operate their own service-specific training institutions, and provide manpower to conduct the training at tri-service institutions, such as the Medical Education and Training Campus (METC), the Directorate plans to provide administrative support; academic review and policy oversight; and professional development, sustainment, and program management to the military departments' medical services, the combatant commands, and the Joint Staff. See figure 1 below for the organizational chart of the DHA. Medical personnel receive training throughout their careers to develop and enhance their skills. Examples of the types of medical training they can receive include (1) initial training for enlisted servicemembers, which results in a new occupational classification; (2) sustainment training for enlisted servicemembers, which does not result in a new occupational classification but refreshes or augments initial training; (3) operational or readiness skills training, which provides training to perform in operational situations throughout the world and includes such training as burn and trauma care as well as emergency and Chemical, Biological, Radiological, Nuclear, and Explosive preparedness; and (4) executive skills training for enlisted servicemembers, officers, and civilians, which provides military health care leaders with executive management and professional administrative skills. These training courses can be presented in shared or service-specific settings that involve varying degrees of a consolidated approach to course curricula, faculty instruction, equipment, and facilities. Figure 2 depicts the locations of this training and whether it is shared (“tri-service”) or service-specific training. Four DOD institutions offer medical training to servicemembers from all three services. These institutions vary in size and subject matter, and include the following: Uniformed Services University of the Health Sciences (USUHS): DOD-funded medical school in Bethesda, Maryland, with a fiscal year 2015 budget estimate of about $146 million. This university provides medical training to health professionals dedicated to a career as a physician, dentist, or nurse in DOD or the U.S. Public Health Service. 
Medical Education and Training Campus (METC): Provides initial skills training to most medical enlisted servicemembers in about 50 areas, such as pharmacy, laboratory, and dental technology; training for combat medics, basic hospital corpsmen, and basic medical technicians; and a number of advanced medical training courses. METC resulted from a 2005 BRAC recommendation to establish a medical education and training complex that co-located medical enlisted training being conducted at five different locations by each of the military services into one location at Fort Sam Houston, Texas. (See fig. 3.) Since first becoming operational in 2010, METC has created 14 new consolidated courses, while 22 of its courses had been consolidated prior to METC's creation. METC trains, on average, about 20,000 students annually and is estimated to cost almost $27 million in fiscal year 2015. See appendix I for a list of courses taught at METC and course participants. Defense Medical Readiness Training Institute (DMRTI): Tri-service organization that is staffed by servicemembers from the Army, the Navy, and the Air Force as well as Department of the Army civilians and, according to officials, had a $1.4 million budget in fiscal year 2013. This organization offers resident and nonresident joint medical readiness training courses as well as professional medical programs that enable military medical personnel, both active duty and reserve, to better perform a wide range of medical and health support missions they face throughout the world. Courses include trauma care, burn care, public health emergency preparedness, humanitarian assistance, and emergency response to chemical, biological, nuclear, and other events. During fiscal year 2013, approximately 3,600 students participated in 122 course iterations in 51 different locations. According to officials, besides providing medical readiness training to U.S. servicemembers, DMRTI has provided this training to officials in 38 countries at the request of a combatant command. Joint Medical Executive Skills Institute (JMESI): Tri-service organization that provides military health care leaders with executive management skill programs, products, and services that are designed to enhance their performance as managers and leaders in the military healthcare environment. The training JMESI provides centers on the Core Curriculum, a collection of 35 executive administrative competencies required of a military hospital commander that tri-service senior leaders are responsible for reviewing and updating every 3 years. Each year approximately 200 managers graduate from JMESI's Healthcare Management Seminar and Military Health System (MHS) Capstone Symposium, and nearly 20,000 students participate in its online, distance learning program. In addition to tri-service training, each of the services operates its own education and training entities that provide additional training to their medical servicemembers. The Army and Navy education and training entities are constituent commands of the Army Medical Command and the Bureau of Medicine and Surgery, respectively, which are headed by Surgeons General. The Air Force education and training entities conduct a wide variety of training, including nonmedical training, and do not report directly to the Air Force Surgeon General. These organizations include the following: Army Medical Department Center and School (AMEDD C&S): Army training headquarters located at Fort Sam Houston, Texas. 
The center formulates the Army Medical Department's medical organization, tactics, doctrine, and equipment. The school educates and trains Army medical personnel. More specifically, the Academy of Health Sciences is the “school” and is part vocational institution, part community college, and part major university. The Academy of Health Sciences includes 361 programs of instruction, 41 of which are taught at METC; 2 levels of officer leader development programs; 6 master's degree programs; 7 doctoral degree programs; and 94 professional postgraduate programs, as well as pre-deployment training within three main centers and a graduate school. First, the Center for Health Education and Training consists of 10 departments whose primary mission is to teach advanced or specialty courses enhancing and building upon the initial training that enlisted soldiers receive from METC and officers receive after finishing their basic courses. Second, the Center for Pre-Deployment Medicine analyzes, designs, and develops individual pre-deployment training courses and products and provides professional expertise and pre-deployment training to increase the technical and tactical abilities of physicians, nurses, and other healthcare professionals. Third, the Leader Training Center provides professional education, doctrinal, and individual leadership training to execute Army missions across a full spectrum of military operations. Additionally, aviation medicine classes are taught at the U.S. Army School of Aviation Medicine at Fort Rucker, Alabama, and forward surgical teams preparing for overseas deployment go through training at the Army Trauma Training Center in Miami, Florida. Navy Medicine Education and Training Command (NMETC): Consists of four centers that provide education, training, and support for Navy medical personnel. The first center is the Navy Medicine Professional Development Center, headquartered in Bethesda, Maryland, which offers educational programs such as the Naval Postgraduate Dental School as well as leadership and specialty courses that focus on the practice and business of military medicine in both the operational and hospital settings, delivered via in-person classes and online. The second center is the Navy Medicine Training Support Center, headquartered in San Antonio, Texas. It serves as the Navy's component command for METC students and instructors to provide administrative and operational control of Navy personnel assigned to METC. The third center is the Navy Medicine Operational Training Center, which is headquartered in Pensacola, Florida, and consists of six detachments and nine training centers at 14 locations throughout the country that teach such areas of Navy medicine as undersea, aviation, expeditionary, special operations, and survival training. Fourth, another section of NMETC provides medical education and training to the reserve components. Air Force: There is no specific Air Force organization focused exclusively on medical training. The Air Force Surgeon General assists Air Force leadership in developing policies, plans, and programs, establishing requirements, and providing resources to the Air Force Medical Service, while the Air Force's Air Education and Training Command (AETC) and the Air Force Materiel Command (AFMC) provide medical training. AETC, which is headquartered at Joint Base San Antonio-Randolph, Texas, oversees a wide variety of medical and nonmedical training. 
AETC is responsible for 114 medical-related courses: 35 initial skills courses conducted mostly at METC; 73 sustainment or skills progression courses conducted at METC and various other locations; and 6 medical readiness courses taught at a military training site near San Antonio, Texas. AFMC, which is headquartered at Wright-Patterson Air Force Base, Ohio, includes the Air Force School of Aerospace Medicine (USAFSAM). USAFSAM is a center for aerospace medical education and training, and offers a series of courses that make up the initial qualification training for flight surgeons, covering hyperbaric medicine, occupational medicine, aviation mishap prevention, and other unique aeromedical issues pertinent to the flight environment. The school trains 6,000 students annually. DOD has outlined the areas of responsibility for its Education and Training Directorate, including consolidation and management of a number of activities currently performed by the services. However, in its plans, DOD has not demonstrated through a fully developed business case analysis how creating a shared service for education and training will result in cost savings. In its October 2013 third submission to Congress on its plans for the implementation of the DHA, DOD proposed a number of projects, or “product lines,” for its Education and Training Directorate shared service. Specifically, DOD identified three product lines for the Directorate, which involve (1) management of professional development, sustainment, and related programs, including METC, the Defense Medical Readiness Training Institute, and the Joint Medical Executive Skills Institute; (2) academic review and policy oversight functions, including management of online courses and modeling and simulation programs; and (3) management of academic and administrative support functions, such as training and conference approval processes. According to DOD's second submission to Congress, the overall purpose and core measure of success for all shared services is the achievement of cost savings. This focus differentiates the objective of establishing shared services from the six other objectives outlined in DOD's plans for the implementation of the DHA. However, in its plans, DOD has not demonstrated, through a fully developed business case analysis that includes an analysis of benefits, costs, and risks, how its Education and Training Directorate projects will result in cost savings. In its third submission to Congress on its implementation plans for the DHA, DOD presented estimates of costs and cost savings for two “sub-product lines” concerning modeling and simulation and online learning. However, these projects do not represent the core of the Directorate's mission, but rather a portion of the academic review and policy oversight project. Further, these projects overlap with DHA's contracting and information technology shared services. Specifically, while cost savings for modeling and simulation are allocated to the Education and Training Directorate, implementation costs are to be incurred by the DHA contracting shared service. In addition, the savings for the online learning project are found within the DHA information technology shared service portfolio. Aside from these projects, DOD did not present information concerning the cost savings of its other shared service projects within the Education and Training Directorate. 
GAO’s Business Process Reengineering Assessment Guide states that a business case begins with (1) measuring performance and identifying problems in meeting mission goals, which is then addressed through (2) the development and selection of a new process. As noted above, the primary stated purpose of the DHA’s shared service projects is to achieve cost savings. The Guide further states that as a project matures, the business case should be enlarged and updated to present a full picture of the benefits, costs, and risks involved in moving to a new process. Such analysis is to provide a sound basis to proceed with the reengineering process. DOD’s own process for developing its shared services, outlined in its second submission on implementation of the DHA, states that after an assessment of the current state of performance and measures of effectiveness have been identified, performance improvement and cost reduction opportunities should be identified. It also states that new processes and initiatives are to be developed to address these challenges, along with associated implementation costs. Further, the National Defense Authorization Act for Fiscal Year 2013 required DOD to develop business case analyses for its shared service proposals as part of its submissions on its plans for the implementation of the DHA, including, among other things, the purpose of the shared service and the anticipated cost savings. DOD does not have a fully developed business case analysis for medical education and training because it has not yet completed the first step of that analysis, which is to identify specific problems, which, given the stated purpose of shared services, should be directed toward the achievement of cost savings. Several of DOD’s other shared service projects present a clear linkage between (1) a stated problem, (2) proposed process changes, and (3) an estimate of benefits, costs, and risks. For example, DOD’s third submission on the implementation of DHA, states that the pharmacy shared service will address rising costs due to variation in drug purchasing, staffing, and formulary management (the problem) through the introduction of MHS-wide standards and business rules (the new processes), which will result in cost savings. Similarly, the plan states that the contracting shared services will address rising costs due to fragmentation in its acquisition strategy (the problem) through a common approach to acquisition planning, program management, contract execution, management, and administration (the new processes). In contrast, DOD listed the new processes the Directorate will employ, but it did not explain the problem its proposed new processes will address, and how they will achieve cost savings. DOD officials stated that they believe that a central problem for the Directorate to address is unnecessary variation of practice between the services, and they believe that efficiencies could be generated through the consolidation of training. However, in its official plans for the Directorate, DOD has not identified this issue or any other challenge related to cost savings as the problem its shared service will address. DOD also lacks the information to assess its current performance to then identify a problem. Specifically, DOD officials stated that they lack data on the cost of DOD’s education programs and potential redundancy within its portfolio of courses, which would allow them to identify a problem and develop processes to address these challenges. 
Indeed, officials have acknowledged this lack of information: they stated that developing a baseline of current medical education and training courses and associated spending is a goal for the Directorate. In addition, some officials cast doubt on the potential cost savings that could be achieved. Several DOD officials told us that the creation of the Directorate represents a logical step in the course of further cooperation among the services in the area of medical training. However, senior service officials stated that the Directorate was unlikely to achieve significant savings and that its creation serves more as a functional realignment than a cost savings endeavor. For example, officials stated that the Directorate provides an opportunity to assign a parent agency to METC, JMESI, and DMRTI, which they described as “orphan” agencies that lack a parent organization. Officials made similar comments during our 2012 review, in which we found that DOD was not able to demonstrate potential financial savings from the creation of METC, but agency officials stated at the time that they believed combining several training sites to form METC had saved money and that other efficiencies had been achieved. In particular, given that DOD continues to lack an understanding of how the establishment of the DHA will affect staff levels, its challenges in identifying cost savings and a clear mission for its education reforms could result in increases in staff levels without any savings. As we noted in our reviews of DOD's plans for the implementation of the DHA, DOD's submissions did not include critical information necessary to help ensure that DOD achieves the goals of its reform of the MHS. Accordingly, in a recent report, the House Committee on Armed Services expressed concern regarding DHA's staffing requirements, cost estimates, performance metrics, and medical education and training shared service. Without a business case analysis that links (1) a stated problem, (2) proposed process changes, and (3) an estimate of benefits, costs, and risks, the role of the Directorate remains ambiguous, and it is unclear how DOD will measure its accomplishments and hold the Directorate accountable for achieving cost savings by sharing training and education services. Without such information, the Directorate also potentially risks increasing staff levels without achieving any cost savings. DOD established METC as part of the 2005 BRAC process to provide interservice training for enlisted servicemembers and to achieve cost savings. However, DOD is unable to determine whether the consolidation of medical education and training for enlisted personnel at METC has resulted in cost savings because it did not establish a baseline for spending on education and training prior to METC's establishment. METC has designed processes to assess the effectiveness of its training and is taking action to improve them. DOD cannot demonstrate whether the consolidation of training at METC has resulted in cost savings. However, officials stated that while they could not document cost savings, they believe that the consolidation of training at METC has led to cost savings because of (1) increased equipment sharing; (2) personnel reductions; and (3) cost avoidances, such as those associated with the closure of medical education facilities that were service-specific. 
In contrast, officials also identified areas where the consolidation of training at METC may have resulted in cost increases because of, for example, (1) the construction of new facilities; (2) relocation of students to METC; and (3) replacement of personnel within their organizations who had been transferred to METC. To fund training at METC, the services transferred funding to a single METC budget managed by the Air Force over 3 years from fiscal year 2010 through fiscal year 2012. The services continue to fund compensation for military instructors at METC. Civilian funding was transferred to the Air Force, and officials told us that this funding is likely to be transferred to the DHA. When METC was established, the services transferred funding for their enlisted medical programs being consolidated at METC into a single METC budget. However, some officials stated they are unsure whether the services' transfers were representative of their true costs for the transferred programs prior to the creation of METC. Additionally, the funding transfers from the services were not sufficient to fund training at METC, and the Office of the Assistant Secretary of Defense for Health Affairs provided additional funding to cover this shortfall. For instance, of the total METC budget of $26.6 million in fiscal year 2012, Health Affairs provided 28 percent; the Air Force, 22 percent; the Army, 36 percent; and the Navy, 14 percent. Table 1 shows the funding amounts transferred by each service to fund METC, from fiscal year 2010, the first year in which the services transferred funds, until fiscal year 2012, when the services completed a permanent transfer of their funds to METC. GAO's Business Process Reengineering Assessment Guide states that performance measures are a critical part of a comprehensive implementation process to ensure that a new process is achieving the desired results. Additionally, through our prior work on performance metrics, we have identified several important attributes of these assessment tools, including the need to develop a baseline and trend data to identify, monitor, and report changes in performance and to help ensure that performance is viewed in context. By developing and tracking a performance baseline for all measures, agencies can better evaluate progress made and whether goals, such as cost savings targets, are being achieved. DOD did not establish and monitor baseline cost information as part of its metrics to assess performance to ensure that the establishment of METC provided cost savings. Officials told us that their focus in establishing METC was to ensure that DOD met the BRAC recommendation to co-locate enlisted medical training, not to ensure that this consolidation led to cost savings. However, the METC business plan, developed in response to the BRAC recommendation, noted that the intent of establishing METC was to reduce costs while leveraging best practice training programs of the three services. We found in April 2012 that DOD was unable to provide documented savings associated with the establishment of METC. We recommended that DOD employ key management practices in order to show the financial and nonfinancial outcomes of its reform efforts, and DOD concurred with our recommendation. DOD noted that it would employ key management practices in order to identify those outcomes; however, as of June 2014, DOD officials have not documented the financial outcome of the establishment of METC. 
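As a simple illustration of why a documented baseline matters, the short Python sketch below first converts the fiscal year 2012 funding shares reported above into dollar amounts and then shows the kind of savings calculation a baseline would make possible. The pre-consolidation baseline figure used here is purely hypothetical, since DOD did not establish one, and the comparison ignores inflation and any changes in training workload or scope.

# Funding shares of the $26.6 million fiscal year 2012 METC budget, as reported above.
metc_budget_fy2012 = 26.6e6
shares = {"Health Affairs": 0.28, "Air Force": 0.22, "Army": 0.36, "Navy": 0.14}
for contributor, share in shares.items():
    print(f"{contributor}: ${share * metc_budget_fy2012:,.0f}")

# A documented pre-consolidation baseline would permit a calculation like the one below;
# the baseline figure is a placeholder because DOD did not establish one for fiscal year 2009.
hypothetical_baseline_cost = 30.0e6
print(f"Estimated savings versus baseline: ${hypothetical_baseline_cost - metc_budget_fy2012:,.0f}")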
DOD justified its request for the 2005 BRAC round in part based on anticipated savings. For example, DOD submitted to the 2005 BRAC Commission a recommendation for the consolidation of 26 military installations operated by individual military services into 12 joint bases to take advantage of opportunities for efficiencies arising from such consolidation and elimination of similar support services on bases located close to one another. However, we found in 2012 that DOD did not have a plan for achieving cost savings. For example, during our review of DOD’s effort to implement this BRAC recommendation, joint base officials provided us with anecdotal examples of efficiencies that had been achieved at joint bases, but it was unclear whether DOD had achieved any significant cost savings to date, due in part to weaknesses in such areas as DOD’s approach to tracking costs and estimated savings. Specifically, it did not establish quantifiable and measurable implementation goals for how to achieve cost savings or efficiencies through joint basing. We recommended that DOD develop and implement a plan that provides measurable goals linked to achieving savings and efficiencies at the joint bases and provide guidance to the joint bases that directs them to identify opportunities for cost savings and efficiencies. DOD did not concur with our recommendation, and we noted that this position contradicts DOD’s position that joint basing would realize cost savings. Similarly, the co-location and consolidation of training at METC was, in part, premised on the achievement of cost savings, but DOD did not establish baseline costs as part of its metrics for assessing performance. It is now likely not possible to develop baseline cost information for fiscal year 2009 to determine the extent to which the establishment of METC resulted in cost savings. However, without developing baseline cost information before undergoing future course consolidation of training at METC and within the Education and Training Directorate, DOD will be unable to accurately assess cost savings in the future. METC has designed quality assurance processes to provide continuous, evaluative feedback related to improvements in education and training support, and is taking action to address issues regarding course accreditation and the post-graduation survey process. Certification Rates: METC monitors the national certification exam pass rates of its students, both to meet national requirements and to make comparisons with national averages. According to METC officials, certification rates are generally higher since the consolidation of training at METC. Currently, certification rates for seven programs exceed the national average. Internal Metrics: According to METC officials, METC regularly monitors a number of internal metrics, such as attrition, course repetition, and graduation rates. To manage performance information for all of their courses, officials produce a monthly snapshot of these data to track trends in performance over time. Additionally, all of METC’s courses are to be reviewed through a comprehensive program review process conducted by the Health Care Interservice Training Office.ensure, for example, that all service and accreditation requirements are met; that faculty meet all required qualifications; and that internal and external surveys are conducted, analyzed, reported, and acted on according to policy. 
Accreditation Standards: METC is institutionally accredited by the Council on Occupational Education and is officially an affiliated school within the Community College of the Air Force (CCAF). Most METC courses are accredited by a relevant external accrediting body, such as the American Council on Education (ACE) or the CCAF. Surveys: The METC Memorandum of Agreement states that METC and the services will conduct external evaluations to document program efficacy and to facilitate curriculum review by gathering feedback to measure whether the training received was relevant and to determine whether the graduates are proficient in their job duties. METC solicits this feedback through surveys sent by the services to the supervisors of METC graduates at the gaining commands to gauge satisfaction with the training the graduates received at METC. These surveys ask such questions as whether the graduates have the cognitive skills necessary to do their jobs, whether they have met the entry-level practice requirements of their organizations, and whether any job tasks should be added to the METC curriculum for their programs of study. METC officials told us that some training courses were awarded fewer recommended credits by ACE than similar service-run courses had received prior to METC's consolidation. Officials also stated that the consolidation of service-run curricula into single programs at METC was conducted by a contractor, and that these consolidated curricula could be improved. METC officials further noted that the ACE review of METC's consolidated curricula occurred after a change to that body's process for recommending credits, and that they are unaware whether the decrease in the number of recommended credits was due to the consolidated curricula or changes to ACE's process. METC officials told us that they are attempting to improve their programs through their regular process of curriculum review ahead of future ACE reviews of recommended credits for their courses. METC officials also told us that the post-graduation survey process has been ongoing since before METC was established; however, these surveys have historically exhibited low response rates. For instance, one sample survey provided by METC officials had a 14 percent student response rate and a 0 percent supervisor response rate. To improve the level of feedback received from these surveys, METC officials have begun a pilot process to conduct their own post-graduation surveys, using an online survey program that can be sent directly to the students' and supervisors' personal email addresses. Depending on the success of the pilot, METC officials plan to extend the process throughout all of METC. DHA's Education and Training Directorate is scheduled to begin operations in August 2014 to oversee medical education and training reform, but DOD does not have key information necessary to assess its progress in realizing the reform effort's goal of achieving cost savings. When DOD responded to the 2005 BRAC recommendation to relocate some medical education and training programs for enlisted servicemembers at METC, DOD similarly did not have key information necessary to determine whether the consolidation of training there had resulted in cost savings. Although DOD's plans for the implementation of the DHA acknowledge the benefits of conducting business case analyses, DOD has not conducted such an analysis for its medical education and training reforms. 
DOD’s inability to demonstrate that cost savings had resulted from the consolidation of training at METC risks being repeated on a larger scale in the reform effort of the DHA’s Education and Training Directorate. Specifically, absent analysis demonstrating how the Directorate’s efforts will result in cost savings, the creation of the Directorate could increase costs by increasing staff levels without achieving any cost savings. In addition, without baseline cost information prior to future course consolidation of training at METC and within the Education and Training Directorate, DOD will be unable to assess potential cost savings. The risk of cost growth also exists for any future consolidations of training at METC, which could require significant investment of time and resources without any long-term efficiencies. To help realize the reform effort’s goal of achieving cost savings, we recommend that the Assistant Secretary of Defense for Health Affairs direct the Director of the DHA to conduct a fully developed business case analysis for the Education and Training Directorate’s reform effort. In this analysis the Director should identify the cost-related problem that it seeks to address by establishing the Education and Training Directorate, explain how the processes it has identified will address the cost- related problem, and conduct and document an analysis of benefits, costs, and risks. To help ensure that DOD has the necessary information to determine the extent to which cost savings result from any future consolidation of training within METC or the Education and Training Directorate, we recommend that Assistant Secretary of Defense for Health Affairs direct the Director of the DHA to develop baseline cost information as part of its metrics to assess achievement of cost savings. We provided a draft of this product to DOD for comment. The Acting MHS Chief Human Capital Officer provided DOD’s comments in an email dated July 21, 2014. In that email, DOD concurred with the draft report's findings, conclusions, and recommendations. Additionally, noted in the email was that Medical Education and Training is the only shared service that has never had any type of oversight by the Office of the Assistant Secretary of Defense for Health Affairs or the pre-DHA TRICARE Management Activity. Further, in that email, DOD noted that that much credit goes to the sub-working group which has worked numerous hours over the past 2 years to put this shared service together so the MHS can realize efficiencies and garner maximum value, exploit best practices from the services, and achieve standardization where it makes sense. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense for Health Affairs; the Director, DHA; and the Surgeons General of the Army, the Navy, and the Air Force. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are included in appendix II. 
The Medical Education and Training Campus (METC) is the result of the 2005 Base Realignment and Closure (BRAC) Commission legislation that required the bulk of enlisted medical training in the Army, the Air Force, and the Navy to be co-located at Fort Sam Houston, Texas. As a result, four major Navy and Air Force learning institutions relocated to Fort Sam Houston, where the Army was already training its enlisted medical force under the Army Medical Department Center & School's (AMEDD C&S) Academy of Health Sciences. The Naval School of Health Sciences in San Diego, California; the Naval School of Health Sciences in Portsmouth, Virginia; the Navy Hospital Corps School in Great Lakes, Illinois; and the 882nd Training Group (now the 937th Training Group) at Sheppard Air Force Base moved to Fort Sam Houston, Texas. METC is now the largest military medical education and training facility in the world. METC started operating on June 30, 2010. Its initial training course was radiography specialist. Other courses were phased in throughout the rest of the year and into 2011. METC became fully operational on September 15, 2011. The longest program offered is cytology, which is the study of cells, at 52 weeks, and the shortest, at 4 weeks, is patient administration. METC offers about 50 medical training programs, which are listed in table 2 along with the course participants. In addition to the contact named above, Lori Atkinson, Assistant Director; Rebecca Beale; Jeffrey Heit; Mae Jones; Carol Petersen; Michael Silver; Adam Smith; and Sabrina Streagle made key contributions to this report. Military Health System: Sustained Senior Leadership Needed to Fully Develop Plans for Achieving Cost Savings. GAO-14-396T. Washington, D.C.: February 26, 2014. Defense Health Care Reform: Additional Implementation Details Would Increase Transparency of DOD's Plans and Enhance Accountability. GAO-14-49. Washington, D.C.: November 6, 2013. Defense Health Care: Additional Analysis of Costs and Benefits of Potential Governance Structures Is Needed. GAO-12-911. Washington, D.C.: September 26, 2012. Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System. GAO-12-224. Washington, D.C.: April 12, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue. GAO-12-453SP. Washington, D.C.: February 28, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. Military Personnel: Enhanced Collaboration and Process Improvements Needed for Determining Military Treatment Facility Medical Personnel Requirements. GAO-10-696. Washington, D.C.: July 29, 2010. Defense Health Care: DOD Needs to Address the Expected Benefits, Costs, and Risks for Its Newly Approved Medical Command Structure. GAO-08-122. Washington, D.C.: October 12, 2007.
To help address DOD's escalating health care costs, in 2013 DOD established the DHA to, among other things, combine common medical services such as medical education and training. DOD trains its servicemembers for a wide variety of medical positions, such as physicians, nurses, therapists, and pharmacists. DHA's Education and Training Directorate is to oversee many aspects of DOD's medical education and training and is now expected to begin operations in August 2014. GAO was mandated to review DOD's efforts to consolidate medical education and training. GAO examined the extent to which DOD has (1) conducted analysis to reform medical education and training to achieve cost savings and (2) determined whether the consolidation of training at METC has resulted in cost savings and designed processes to assess its effectiveness. GAO compared DHA implementation plans and METC budget information from fiscal years 2010 through 2012 with best practices and interviewed officials from the DHA, METC, and military services' Surgeons General offices. In its 2013 plans for the implementation of the Defense Health Agency (DHA), the Department of Defense (DOD) outlined the responsibilities of a new Education and Training Directorate, but has not demonstrated how its proposed reforms will result in cost savings. The National Defense Authorization Act for Fiscal Year 2013 required DOD to develop business case analyses for its shared service proposals as part of its submissions on its plans for the implementation of DHA, including, among other things, the purpose of the shared service and the anticipated cost savings. Although DOD has stated that the Directorate is a shared service that combines common services and that it will result in cost savings, DOD has not fully developed the required business case analysis for the medical education and training reforms. This is because DOD has not yet completed the first step of the process, which includes identifying the specific problems that the reform is intended to address and thereby achieve cost savings. Unlike the medical education and training reforms, other DOD shared service projects present a clear linkage between (1) a stated problem, (2) proposed process changes, and (3) an estimate of benefits, costs, and risks. For the Directorate, DOD has identified the new processes it will employ, but has not identified the concerns the proposed new processes are intended to address and how they will achieve cost savings. In addition, some officials are unconvinced that the potential cost savings will be achieved, and stated that the creation of the Directorate serves more as a functional realignment than a cost savings endeavor. Without a fully developed business case analysis, it is unclear how DOD will measure any accomplishments and hold the Directorate accountable for achieving cost savings. DOD is unable to determine whether the consolidation of training at the Medical Education and Training Campus (METC) resulted in cost savings; however, DOD is taking action to improve some of the processes for evaluating the effectiveness of training at METC. DOD co-located medical training for enlisted medical servicemembers at METC as part of the 2005 Base Realignment and Closure Commission (BRAC) process to achieve cost savings, and subsequently, the services decided to consolidate their training. However, some officials stated they were unsure whether all funds were transferred to METC. 
Furthermore, due to a shortage of military service funds, the Office of the Assistant Secretary of Defense for Health Affairs provided funding for METC in addition to the services' transfers. DOD is unable to determine whether the consolidation of training at METC resulted in cost savings because it did not develop baseline cost information as part of its metrics to assess METC's success. Baseline cost information is a key characteristic of performance metrics critical to ensuring that processes achieve the desired results. Without baseline cost information prior to future course consolidation of training at METC and within the Education and Training Directorate, DOD will be unable to assess potential cost savings. DOD has designed processes to evaluate the quality of training at METC—including processes related to certification rates, accreditation, and surveys. Further, DOD has taken action to improve some processes. For example, to improve the level of feedback received from METC surveys, METC officials have begun a pilot process to conduct their own post-graduation surveys. GAO recommends that DOD conduct a fully developed business case analysis for the Education and Training Directorate and develop baseline cost information as part of its metrics to assess cost savings for future consolidation efforts. In comments to a draft of this report, DOD concurred with each of GAO's recommendations.
Federal agencies can use a variety of different approaches to purchase office supplies. For relatively small purchases, generally up to $3,000, authorized users can use their government purchase cards. For larger purchases, agencies may use other procedures under the Federal Acquisition Regulation, such as awarding a contract. Alternatively, GSA provides federal agencies with a simplified method for procuring office supplies through its Federal Supply Schedule program, also known as the Multiple Award Schedules (MAS) or schedules program. Under the schedules program, the federal government’s largest interagency contracting program, GSA awards contracts to multiple vendors for a wide range of commercially available goods and services. The schedules program can leverage the government’s significant aggregate buying power. Also, under the schedules program, to ensure the government is getting the most value for the taxpayer’s dollar, GSA seeks to obtain price discounts equal to those that vendors offer their “most favored customers.” In November 2007, GSA initiated another approach for buying office supplies by creating blanket purchase agreements (BPA) under the schedules program. BPAs are a simplified method of fulfilling repetitive needs for supplies and services that also provide an opportunity to seek reduced pricing from vendors’ schedule prices. The approach was part of the government’s Federal Strategic Sourcing Initiative (FSSI). GSA officials acknowledged they could have done a better job promoting this initiative. Ultimately, GSA determined that the initiative did not meet its expectations and initiated a second strategic sourcing initiative known as FSSI Office Supplies II (OS II) in 2010. By July 2010, GSA competitively awarded 15 BPAs to 13 small businesses and 2 other businesses to support the OS II initiative. The GSA study on office supply purchases reviewed 14 categories of mostly consumable office supplies, ranging from paper and writing instruments to calendars and filing supplies. The report did not include non-consumable items such as office furniture and computers because they are not part of the standard industry definition of office supplies. The GSA report estimated that the 10 agencies with the highest spending on office supplies accounted for about $1.3 billion, about 81 percent of the total $1.6 billion spent governmentwide on the 14 categories of office supplies during fiscal year 2009. The amounts spent by the top 10 agencies are shown in figure 1. The report found that about 58 percent of office supply purchases were made outside of the GSA schedules program, mostly at retail stores. The report also found that agencies often paid more—a price premium—than they would have by using the GSA schedules program or OS II. On average, GSA found that agencies paid 75 percent more than schedule prices and 86 percent more than OS II prices for their retail purchases. Table 1 shows the 14 categories of office supplies, the number of different items in each of the categories, and the retail price premiums that GSA calculated for each category when compared to schedule prices. The report also concluded that buyers engaged in at least some level of price comparisons before making purchasing decisions. More specifically, the report stated that buyers may compare prices across different vendors when ordering through an electronic medium, or across available items when purchasing directly through a vendor’s online or retail store. 
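Although the report's exact computation is not reproduced in the text, the reported figures imply the conventional definition of a price premium, illustrated here with hypothetical numbers: price premium (in percent) = (price paid − comparison price) ÷ comparison price × 100. For example, an item bought at retail for $1.75 that is available on the schedules for $1.00 would carry a retail price premium of (1.75 − 1.00) ÷ 1.00 × 100 = 75 percent, matching the average premium GSA reported for retail purchases relative to schedule prices.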
GSA used several sources of data to analyze and compare the prices paid for 219 items across 14 categories of office supplies through various purchasing options. The GSA report acknowledged some limitations with the data, but we identified additional data and other limitations that lead us to question the magnitude of some of GSA’s reported price premiums. We were not able to fully quantify the impact of these limitations. Other agencies also questioned the study’s specific findings related to price premiums, but their own studies of price premiums support GSA’s conclusion that better prices can be obtained through consolidated, leveraged purchasing. The GSA study also concluded that buyers compared prices before making purchases, but this conclusion was not based on information from actual purchase card holders. Purchasing of office supplies is highly decentralized with about 270,000 purchase cardholders and others across the government making purchases in fiscal year 2009. Because of this, GSA obtained data for its study from multiple government sources, and purchase card information provided by the commercial banks that issue the government purchase cards. To determine the funds spent on office supplies and to conduct related analyses, GSA sorted through data from these various sources, which included about 7 million purchase transactions involving over 12 million items. GSA took a number of steps to clean the data prior to using them. For example, because a single purchase might have been reported in more than one data source, GSA removed duplicate purchases prior to its analysis. The data were further cleaned to remove items and their related costs that did not meet GSA’s definition of office supplies. To determine retail price premiums, GSA focused its analyses on 219 office supply items that were purchased in 2009 from retailers and the GSA schedules. In its report, GSA acknowledged that the data used to analyze governmentwide purchases of office supplies in 2009 had limitations, in part due to the decentralized data sources for office supply purchases and the limited time GSA had to conduct its study. A significant issue GSA faced was attempting to control for variation in quantities; in other words, GSA tried to ensure that when comparing prices, it was using transactions that involved identical quantities. A purchase of pens, for example, could involve a single pen, a package of three pens, a box of a dozen, or any other quantity. GSA officials told us that the primary means they used to control for quantities was the use of the manufacturer’s part number. They explained that they searched available databases to identify items with identical part numbers. They told us that when they found large variations in retail prices for apparently identical items, they excluded transactions they considered to be outliers. This approach, however, may not have been adequate to account for variations in quantity. When we contacted a national organization representing manufacturers, a senior staff director told us that there is no consistent approach among manufacturers for assigning part numbers. Some manufacturers may assign one part number to individual items and different part numbers to packages of those same items containing different quantities, while other manufacturers may assign the same part number both to individual items and to packages of items. 
In addition, when we reviewed some of the individual transaction data GSA obtained for retail purchases, we identified substantial price variations for a number of drawing and graphic arts supplies and writing instruments that carried the same manufacturer's part number. Specifically, when we reviewed GSA's retail transaction data for 10 items within the writing instruments category, we found that retail prices for 6 of the 10 items varied by more than 300 percent. For instance, for one item involving black rollerball pens, GSA's retail transaction data showed prices ranging from $9.96 to $44.96 for items listed with the same part number. These transactions were all with the same nationwide retailer. When asked about such substantial price differences for items with the same part number, GSA officials acknowledged that the purchase card data they used for retail prices did not always accurately identify the quantity of items involved in each transaction. The existence of substantial price differences for a number of items indicates that GSA's attempts to compare prices may not have adequately controlled for variations in quantities. We also identified a weakness in the clarity of the GSA report's explanation of how price premium estimates were calculated. Specifically, GSA's study described one formula used to calculate the price premiums, but our review of the study's supporting documents found that GSA actually used a different formula to calculate price premiums for 10 of the 14 office supply categories. In discussions with us, GSA officials agreed that the study involved the use of two different formulas. When we used the formula described in the study to recalculate the retail price premiums for those 10 categories of office supplies, we found the price premiums would have changed from what GSA reported by less than 5 percentage points for all categories except drawing and graphic arts supplies. For that category, the recalculated price premium was 68 percent, as compared to the 278 percent reported in the study. The use of this unreported formula did not have a substantial impact on the retail price premium calculations for most categories of office supplies or the overall conclusions of the study, but the GSA report could have been more complete had it fully disclosed all the formulas used for all categories of office supplies. On the basis of their own studies, Air Force, Army, Navy, and DHS officials also questioned the specific price premiums and savings reported by GSA. Officials from these agencies told us they believed that the price premiums GSA reported for purchases made outside the GSA schedule were overstated. However, the agencies agreed with GSA's overall conclusion that better prices can be obtained through leveraged buys. In addition, all four agencies in our review found that the prices available through the new OS II BPAs were better than the prices available from their existing agency BPAs. For example, a DHS study found savings of about 20 percent when analyzing the prices associated with a mix of 348 items. The Air Force determined that the OS II BPAs could save about 7 percent in a study of the 125 most commonly purchased items. On the basis of this analysis, the Air Force decided to let its existing office supply contracts expire. Similarly, the Navy's comparison of 71 items found that using the OS II BPAs could save about 6 percent, which led Navy officials to move purchasing to the OS II BPAs. 
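GSA's report does not disclose either of the two formulas at issue, so the following example, with purely hypothetical numbers, illustrates only the general point that the choice of aggregation method can swing a category-level premium substantially. Suppose a category contains two items: item A with a retail price of $10, a schedule price of $9, and 1,000 transactions, and item B with a retail price of $30, a schedule price of $8, and 10 transactions. Averaging the item-level premiums gives about 143 percent, because item A's premium is ($10 − $9) ÷ $9, or about 11 percent, and item B's is ($30 − $8) ÷ $8, or 275 percent. Comparing aggregate spending instead gives about 13 percent, because total retail spending is (1,000 × $10) + (10 × $30) = $10,300, total schedule spending is (1,000 × $9) + (10 × $8) = $9,080, and ($10,300 − $9,080) ÷ $9,080 is roughly 0.13. Neither calculation is GSA's; the example simply shows why disclosing the formula used for each category matters.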
Army officials did not provide study results, but they told us their analysis found lower price premiums than reported by GSA. An Army official said the Army plans to continue using its existing BPAs while it transitions to OS II. GSA interviewed senior-level acquisition officials to determine how office supply purchasing decisions were made within their respective agencies and concluded that purchase cardholders compared costs at some level prior to making a purchase. While these officials may have had a broad understanding of agency procurement policies and practices based on their positions in their respective agencies, they were not representative of the approximately 270,000 purchase cardholders making the purchasing decisions. The GSA report did not identify or collect any data about price comparisons conducted by the cardholders. Collecting information from buyers, even through interviews or a survey of government purchase cardholders who actually made the purchases, could have provided another perspective on buyer behavior, including the extent to which price comparisons were made. GSA officials said that, given the reporting timeframe for the study, they did not have the resources or time that would have been needed to conduct a study that included a representative sample of the 270,000 purchase cardholders. According to initial available data, GSA's new OS II BPAs have produced savings. The OS II initiative, more so than past efforts, is demonstrating that leveraged buying can produce greater savings and has provided improvements for managing ongoing and future strategic sourcing initiatives. GSA is using a combination of agency and vendor involvement to identify key requirements and cost drivers, increase the ease of use, and obtain the data necessary to manage the program. For example, a key aspect of the initiative is that participating vendors provide sales and other information to GSA to help monitor prices, savings, and vendor performance. On the basis of the sales data provided by OS II vendors, GSA estimates the federal government saved $16 million from June 2010 through August 2011 by using these BPAs. These savings were estimated by comparing the lowest prices of a set of over 400 items available on GSA's schedules program contracts before OS II with prices and discounts being offered for the same items on the OS II BPAs. Importantly, and unlike GSA's report, GSA's conclusions about savings realized under OS II are based on data from vendors—which they are required to collect and provide in the normal course of business—and not on data collected after the fact from sources not designed to produce information needed to estimate savings. GSA's comparison of the market basket of best schedule prices against the OS II BPA vendors' prices found that the BPA vendors offered prices that were an average of 8 percent lower, and the average savings is expected to fluctuate somewhat as the OS II initiative continues to be implemented. The expected fluctuation is based on anticipated changes in the mix of vendors, products, and agencies. For example, GSA found the savings, as a percentage, declined slightly as agencies with historically strong office supplies management programs increased their use of OS II. Conversely, GSA expects the savings percentage to increase as agencies without strong office supplies management programs increase their use. 
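The following is a simplified reading of the estimation approach GSA described, using hypothetical numbers rather than GSA's actual data: for each of the roughly 400 market-basket items, the estimated saving is the quantity purchased under OS II multiplied by the difference between the lowest pre-OS II schedule price and the OS II BPA price, and these item-level amounts are summed across the basket. For example, if an item's lowest schedule price before OS II was $5.00, its OS II BPA price is $4.60, and agencies bought 10,000 units under the BPAs, the estimated saving for that item would be ($5.00 − $4.60) × 10,000 = $4,000. The 8 percent figure works the same way at the basket level: a basket whose best schedule prices total $100.00 and whose BPA prices total $92.00 is 8 percent lower.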
In addition to the savings from the BPAs, GSA representatives told us that they are seeing prices decrease on schedules program contracts as vendors that were not selected for the OS II program react to the additional price competition created by the OS II initiative by reducing their schedule prices. After the first year the OS II BPAs were in use, GSA extended the BPAs for an additional year after negotiating additional price discounts. As a result of these negotiations, 13 of the 15 BPA vendors decreased their prices by an additional 3.9 percent on average. Additionally, the BPAs included tiered discounts, which apply when specific sales volume thresholds are met. Sales realized by one of the BPA vendors reached the first-tier discount level in September 2011, and the vendor has since adjusted its prices to provide the corresponding price discounts. GSA anticipates that additional vendors' sales will exceed the first-tier discount threshold during the first option year, which will trigger further discounts. GSA expects that OS II will result in lower government-wide costs for office supplies as more agencies move from their agency-specific BPAs for office supplies to the OS II BPAs. Many agencies that had their own BPAs for office supplies did not renew their BPAs and have opted to use the OS II BPAs instead. As these agencies move to OS II, their contract management costs should decrease. For example, according to Air Force officials, instead of having personnel in every agency administer their own BPAs for office supplies, personnel at GSA will administer the OS II program on behalf of other agencies. While this may create some additional burden for GSA, officials believe the overall government costs to administer office supply purchases should decrease. GSA has incorporated a range of activities representative of a strategic procurement approach into the OS II initiative, including aspects of managing the suppliers. These activities range from obtaining a better picture of spending on services, to taking an enterprisewide approach, to developing new ways of doing business. All of these activities involve some level of centralized oversight and management. In addition, this approach involves activities associated with the management of the supply chain, which includes planning and managing all activities involved in sourcing and procurement decisions, as well as logistics management activities. These include coordination and collaboration with stakeholders, such as suppliers or vendors, intermediaries sometimes referred to as resellers, third-party service providers, and customers or buying agencies. As part of the planning process for OS II, GSA assessed its schedules program office supply vendor pool and determined that a sufficient number of vendors could meet its critical requirements. As part of the overall strategy, in addition to savings, GSA, through its commodity council, also identified five overarching goals for the OS II initiative, to facilitate overall management, as shown in table 2. As part of preparing for the competition for OS II, GSA obtained input from the interested vendors before issuing the request for quotations by holding an industry day. For example, based on vendor input that identified shipping as a key cost driver, a $100 minimum order level was included as part of the BPA. A reverse auction process was used to carry out the competition for the BPAs, which GSA anticipated would result in more pricing discounts offered by vendors. 
As part of the reverse auction process, the vendors submitted an initial quote. After GSA evaluated the quotes, the vendors were notified of the lowest quotes and provided at least one opportunity to revise their quotes, resulting in price reductions. GSA obtained commitments from agencies and helped set goals for additional discounts to let businesses know that the agencies were serious in their commitment to the BPAs. This also helped GSA determine the number of BPAs that would be awarded. Because government purchase cards were the most common way to purchase office supplies, OS II includes a point-of-sale discount, under which BPA prices are automatically charged whenever a government purchase card is used for an item covered by the BPA, rather than requiring the buyers to ask for a discount. Additionally, purchases are automatically tax exempt if the purchases are made using a government purchase card. State sales taxes were identified by GSA's report as costing the federal agencies at least $7 million in fiscal year 2009. To address concerns about vendor oversight and management, OS II has attempted to clearly define program implementation responsibilities, including laying out GSA, vendor, and buying agency responsibilities. A key aspect of a successful acquisition program is managing the vendors or suppliers to ensure that they are meeting the terms and conditions of the contract or BPA and that the program or initiative is meeting its overall goals. This includes defining performance metrics, capturing or collecting data, preparing analysis and related reports, communicating the results of the analysis, and initiating corrective actions. GSA is capturing data on purchases and vendor performance that are assimilated and tracked through dashboards, which are high-level indicators of overall program performance. The dashboard information is used by the GSA team members responsible for oversight and is shared with agencies using OS II. Our review of GSA's OS II vendor files found that GSA has taken a more active role in oversight and is holding the vendors accountable for performance. For example, GSA has issued Letters of Concern to four vendors and has issued one Cure Notice to a vendor. These letters and notices are used to inform vendors that the agency has identified a problem with the vendor's compliance with the terms and conditions of the BPA. To support the OS II management responsibilities, GSA charges a 2 percent management fee, which is incorporated into the vendor prices. This fee, which is higher than the 0.75 percent fee normally charged on GSA schedules program sales, covers the additional program costs, such as the cost of the six officials responsible for administering the 15 BPAs, as well as their contractor support. GSA is learning lessons from OS II, the first of its second generation of strategic sourcing initiatives, and is attempting to incorporate these lessons into other strategic sourcing initiatives. While some of the lessons learned as OS II has progressed are not directly transferable to other initiatives, there are some aspects that can be applied to any strategic sourcing initiative. To this end, GSA established an office supplies commodity council to identify agencies' goals and needs. The input provided by the commodity council was incorporated into all aspects of the program, from the vendor requirements to the selection criteria. This experience is being applied to other strategic sourcing initiatives. 
For example, GSA took a more collaborative approach as it moved to Federal Strategic Sourcing Initiative Second Generation Domestic Delivery Services II (DDS2). More specifically, GSA set up a commodity council that helped identify the program requirements and provide input on how the program operates. Vendor input was also sought and incorporated into the requirements. GSA's office supplies report contained some data and other limitations, but it showed that federal agencies were not using a consistent approach to where and how they bought office supplies and often paid a price premium as a result of these practices. The magnitude of the price premium may be debatable, but other agencies that have conducted studies came to the same basic conclusion about the savings potential from leveraged buying. The GSA study helped set the course for a more strategic approach to buying office supplies—an approach that provides data to oversee the performance of vendors, monitor prices, and estimate savings. Additional savings are expected as more government agencies participate in the OS II initiative and further leverage the government's buying power. We provided a draft of this report to GSA, DHS, and DOD. We received written comments from GSA and DHS, which are included as appendices I and II, respectively. DOD had no comments. In its comments, GSA said it was pleased that our report affirmed that savings can be achieved through leveraged purchasing and better understanding of spend data. GSA also provided additional information on its strategic sourcing initiatives. GSA noted that it would have been very resource-intensive for the agency to obtain information from a representative sample of the 270,000 purchase cardholders for little added benefit. We revised our report to reflect GSA's comment. GSA provided some suggested language and technical changes to help clarify the report, which we incorporated as appropriate. We did not use GSA's suggested language concerning the limitations we identified in its study because we believe the language in our report accurately reflects our finding on this issue. DHS stated that it appreciated our work and provided additional information on its respective strategic sourcing initiatives. DHS also stated that it has realized savings from the OS II initiative and expects to continue to do so. We are sending copies of this report to the Administrator of General Services; the Secretaries of Homeland Security and Defense; and the Secretaries of the Air Force, the Army, and the Navy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. In addition to the contact named above, James Fuquay, Assistant Director; Marie Ahearn; Morgan Delaney Ramaker; Joseph Fread; Jean Lee; Jean McSween; Kenneth Patton; Carol Petersen; Raffaele Roffo; William Russell; Roxanna Sun; Jeff Tessin; and Ann Marie Udale made significant contributions to this report.
Concerned that federal agencies may not be getting the best prices available, Congress directed the General Services Administration (GSA) to study office supply purchases by the 10 largest federal agencies. GSA delivered the results of its study in November 2010. The study also discussed GSA’s efforts to implement an initiative focused on leveraging the government’s buying power to realize savings when buying office supplies, known as Office Supplies II (OS II). Under this initiative, GSA entered into agreements with vendors based on discounted prices to be offered to all federal agencies. Congress directed GAO to assess the GSA study, with particular attention to the potential for savings. Accordingly, GAO assessed (1) the support for the findings and conclusions in GSA’s report and (2) how GSA's new office supply contracts support the goal of leveraging the government’s buying power to achieve savings. To conduct this work, GAO analyzed the data GSA used for its study; met with and obtained documentation from officials at GSA and the Departments of Homeland Security (DHS), Air Force, Navy, and Army, which were among the 10 agencies in GSA’s study; and reviewed contract documentation associated with GSA’s new office supplies initiative. GSA and DHS commented on a draft of this report. GSA said it appreciated our recognition that leveraged purchasing can produce savings and also provided technical comments, which we incorporated as appropriate. DHS provided additional information on its strategic sourcing initiatives. GSA estimated that federal agencies spent about $1.6 billion during fiscal year 2009 purchasing office supplies from more than 239,000 vendors. GSA concluded that agency buyers paid higher prices when they bought office supplies outside GSA’s Multiple Award Schedule program than they would have using the schedules or OS II. According to GSA, the price premiums averaged 75 percent compared to the schedule prices and 86 percent compared to OS II. GAO identified data and other limitations in GSA’s study, such as not always controlling for variation in quantities of identical items when comparing prices. GAO was not able to fully quantify the impact of these limitations. Officials from other agencies—Air Force, Army, Navy, and DHS—also questioned the study’s specific findings on price premiums, believing them to be overstated, but their own studies support GSA’s general conclusion that better prices can be obtained through consolidated, leveraged purchasing. The GSA study also concluded that buyers compared prices before making purchases, but this conclusion was based on interviews with senior-level acquisition officials and not on information obtained from any of the approximately 270,000 government purchase cardholders who made the purchasing decisions. According to available data, GSA’s new office supplies sourcing initiative, OS II, has produced savings. GSA estimated that the government saved $16 million from June 2010 through August 2011 through this initiative. According to GSA, the OS II initiative is demonstrating that leveraged buying can produce savings and has provided improvements for managing ongoing and future strategic sourcing initiatives. GSA reports that OS II allowed it to negotiate discounts with vendors who were selected for the initiative, and has spurred price competition among schedule vendors that were not selected as they react to the OS II pricing, resulting in decreased schedule prices. 
The initiative is also expected to lower government-wide office supply costs through more centralized contract management. Another key aspect of the initiative is that participating vendors provide sales and other information to GSA to help monitor prices, savings, and vendor performance. Finally, the OS II initiative offers lessons learned for other strategic sourcing initiatives, including the importance of identifying agencies’ goals and needs and ensuring buying agency participation.
The 31 DFEs we surveyed were established in various statutes as commissions, boards, authorities, corporations, endowments, institutions, agencies, and administrations. Their heads may be individuals, such as a chairperson or a director, or groups, such as commissions or boards. Individuals and members of commissions and boards are generally appointed by the President and confirmed by the Senate, but members for some entities are set statutorily without additional appointment and confirmation. For instance, the Pension Benefit Guaranty Corporation’s statute sets the corporation’s board members as the Secretary of Labor, Secretary of the Treasury, and Secretary of Commerce, all of whom are appointed to their cabinet positions by the President and confirmed by the Senate. Each year, the Office of Management and Budget (OMB) determines and publishes a list of DFEs and their heads. OMB uses the definition under the IG Act, as amended, for the head of a DFE, which is any person or persons designated by statute as the head of the DFE or if no such designation exists, the chief policymaking officer or board of the DFE. It is important to note that the term governing body for purposes of this report is broad and therefore could include members in addition to the entity head under the IG Act, as amended. Table 1 shows the 31 DFEs categorized by organizational structure and entity head for 2008. Governance can be described as the process of providing leadership, direction, and accountability in fulfilling an organization’s mission, meeting its objectives, and providing stewardship of public resources, while establishing clear lines of responsibility for results. Accountability represents the processes, mechanisms, and other means—including financial reporting and internal controls—by which an entity’s management carries out its stewardship and responsibility for resources and performance. Commonly accepted governance practices for federal entities and nonprofit corporations have significantly evolved since the financial scandals at large public companies in the early 2000s and passage of the Sarbanes-Oxley Act of 2002. The Sarbanes-Oxley Act outlined a framework for more effective corporate governance and introduced reforms to public company financial reporting and auditing. Although the act strengthened corporate governance only in the private sector, the federal government and nonprofit sectors have also strengthened governance and internal control requirements and practices. According to OMB, passage of the Sarbanes-Oxley Act served as an impetus for the federal government to reevaluate its current policies related to internal control over financial reporting and management’s related responsibility. The Inspector General Act of 1978 (1978 IG Act) created offices of inspectors general at major departments and agencies to prevent and detect fraud and abuse in their departments’ and agencies’ programs and operations; conduct audits and investigations; and recommend policies to promote economy, efficiency, and effectiveness. In 1988, the 1978 IG Act was amended to establish additional IG offices in certain federal entities designated by the legislation. Generally, the DFE IGs have the same authorities and responsibilities as those established by the 1978 IG Act, but there is a clear distinction—they are appointed and removed by their agency heads rather than by the President and are not subject to Senate confirmation. 
The 31 DFE IGs make up about half of all IG offices established under the IG Act, as amended, and in fiscal year 2007 were responsible for oversight with respect to gross agency budgets that ranged from $10 million to $80.2 billion. The IG Reform Act of 2008 (2008 Reform Act) was enacted on October 14, 2008, to, among other things, enhance the independence of the inspectors general. (We have reported several times on independence issues and challenges for the IG community.) Specifically, the 2008 Reform Act provides that both the President and DFE heads must give written reasons to Congress for removing an IG at least 30 days prior to removal. The 2008 Reform Act mandates that IGs submit their budgets to their entity's head, who shall include, among other things, an aggregate request for the IG in the entity's budget proposal to the President. The President must include in the budget of the U.S. government submitted to Congress a separate statement of each IG's budget request and the amount requested by the President for each IG. Before the act, only the presidentially appointed IGs and three DFE IGs had such transparency over their budgets. Regarding pay, the presidentially appointed IGs have been paid at Executive Level IV, while the DFE IGs have been paid at the GS-15 grade, Senior Executive Service level, or equivalent salary level determined by their entity. The 2008 Reform Act requires that all presidentially appointed IGs be paid at Executive Level III, plus 3 percent, and that all DFE IGs be paid at a level at or above the level of a majority of senior executives of the respective DFEs. The 2008 Reform Act was effective upon passage, but it provided up to 180 days to establish the Council of Inspectors General on Integrity and Efficiency. While Congress has historically weighed many political and policy factors in deciding on DFE governance structures, and applied specific accountability requirements to achieve its original objectives, current private sector guidance says that governing bodies need to be large enough to accommodate the necessary skill set, but still small enough to promote cohesion, flexibility, and effective participation. DFEs vary in their statutory size and structure as well as their statutory purpose and requirements for governance. Survey responses showed that the size of DFE governing bodies ranges from 1 to 24 members. Thirteen of the 31 DFEs had at least one vacancy in their governing body. At 2 of the DFEs—the Consumer Product Safety Commission and the National Labor Relations Board—active members were outnumbered by vacancies. Only 7 of 29 DFE governing bodies have committees that deal with governance or oversight. Committees can enhance the overall effectiveness of the governing body by ensuring focus and oversight in areas of concern. In order to improve governance and accountability at federal agencies, a variety of laws covering a range of management and administrative practices and processes have been enacted. Of 12 key governance and accountability statutes that we selected for review, 13 of 31 DFEs responded that they are statutorily required to comply with all 12 statutes. Based on the responses of the remaining 18 DFEs, the applicability of the 12 statutes varied, with 1 DFE—the Corporation for Public Broadcasting—stating that it is not subject to any of the 12 key governance and accountability statutes. 
Some entities that said they are not statutorily required to comply with the statutes indicated that they have adopted the provisions voluntarily or implemented an alternative mechanism to attain the objectives of the statute. Finally, in relation to the governing body's effectiveness, 19 of the 29 DFEs surveyed reported having orientation programs for new governing body members, while only 10 DFEs reported having ongoing training for governing body members. Orientation and training programs for governing body members aimed at providing information on governance practices and the regulatory environment are important for the DFE governing body's ability to carry out its responsibilities effectively and efficiently. Corporate governance guidelines in the private sector state that governing bodies should establish committees that will enhance their overall effectiveness by ensuring focus and oversight for areas of concern. Our work shows that few DFEs reportedly have audit committees, none have an ethics committee, and only a limited number have orientation and ongoing training for governing body members, which is inconsistent with the governance practices established in other sectors such as public companies or nonprofits. Congress has over many decades weighed a variety of political and policy considerations, such as political independence and accountability, efficiency, and specific entity missions, in deciding on DFE governance structures, and applied specific accountability requirements, such as governing body appointment and removal authorities and governing body public meeting requirements, to achieve its original objectives. Current private sector guidance says that governing bodies need to be large enough to accommodate the necessary skill set, but still small enough to promote cohesion, flexibility, and effective participation. The DFEs' governing bodies range in size from 1 to 24 members. For comparison, according to the 2006 edition of the annual Directors' Compensation and Board Practices report by The Conference Board, the median board size of publicly traded corporations, depending on the industry, ranges from 9 to 11 members. Of the 31 DFEs, only the Corporation for Public Broadcasting, the Legal Services Corporation, and the United States Postal Service statutorily have 9 to 11 governing body members. Three entities—the National Science Foundation, the Smithsonian Institution, and the Appalachian Regional Commission—statutorily have more than 11 members, while the remaining 25 DFEs have 8 or fewer governing body members. (See table 2.) At the time of our review, vacancies reportedly outnumbered active members on the governing bodies of the Consumer Product Safety Commission (CPSC) and the National Labor Relations Board. In recent years, Amtrak and the Federal Election Commission have also had significant vacancies. In January 2008, four of the six commissioner seats for the Federal Election Commission were vacant. Over the past several years, the number of active board members at Amtrak has fluctuated, and at least twice—between December 2007 and March 2008 and between October 2003 and June 2004—the board had only two voting members (excluding the Secretary of Transportation or his designee). Without the minimum number of members required to conduct business, a board may be legally unable to make certain decisions. For instance, the Federal Election Commission's enabling legislation requires that four of its six commissioners be present for certain entity business to be carried out. 
Also, the National Endowment for the Humanities and the National Endowment for the Arts governing bodies, which are single-member governing bodies, are currently vacant. According to The Conference Board's corporate governance guidelines, corporate boards should be structured so that the composition and skill set of a board are appropriate based on the corporation's particular challenges and strategic vision. The size of a governing body is important not only for establishing the necessary range of skills but also for promoting the cohesion, flexibility, and effective participation members need to achieve their governance objectives. Generally, the membership of DFE governing bodies is defined by the DFE's authorizing legislation, with many DFE governing body members appointed by the President, with the advice and consent of the Senate. For instance, the Pension Benefit Guaranty Corporation's governing body is statutorily composed of three members—the Secretary of Labor, the Secretary of the Treasury, and the Secretary of Commerce. The Secretary of Labor is the chairperson and entity head under the IG Act. The Appalachian Regional Commission is statutorily composed of governors of the 13 Appalachian states and a federal cochair. The Smithsonian Board of Regents is statutorily composed of the Vice President, the Chief Justice of the United States, three members of the Senate, three members of the House of Representatives, and nine other members not from Congress. In order to improve governance and accountability at federal agencies, a variety of laws covering a range of management and administrative practices and processes have been enacted. We identified 12 statutes as key to governance and accountability. The statutes, which are described in Appendix III, cover funds control, performance and financial reporting, accounting and internal control systems, human resources management, and recordkeeping and access to information. They are the Anti-Deficiency Act (ADA); the "Purpose Statute" (31 U.S.C. § 1301(a)); the Improper Payments Information Act of 2002 (IPIA); the Accountability of Tax Dollars Act of 2002 (ATDA); the Government Performance and Results Act of 1993 (GPRA); the Federal Managers' Financial Integrity Act of 1982 (FMFIA); the Federal Information Security Management Act of 2002 (FISMA); Travel, Transportation, and Subsistence (5 U.S.C. Chapter 57); the Whistleblower Protection Act (WPA); the Ethics in Government Act of 1978 (Ethics); the Freedom of Information Act (FOIA); and the Government in the Sunshine Act (Sunshine). Based on results from a data request we sent to the DFEs, table 3 shows that 13 of the 31 DFEs reported that they are subject to all 12 key governance statutes. In responding to our data request, several DFEs indicated that although they are not required to comply with a particular statute, they are in essence following the statute, having adopted the provisions of the statute voluntarily or implemented an alternative mechanism to attain the statute's objectives. (See Appendix IV.) Corporate governance guidelines in the private sector state that governing bodies should establish committees that will enhance their overall effectiveness by ensuring focus and oversight for areas of concern. In the private sector, statutes and standards require that public company boards of directors maintain certain standing committees, such as audit, nominating, ethics, and compensation. 
In addition, governing bodies have established committees to focus on issues or particular concerns of the board such as risk, technology, public policy, and corporate governance. Committees handle specific issues or topics and usually make policy recommendations for the full board to consider. Most DFEs do not have governance or internal oversight committees. However, DFEs, like all federal entities, do receive oversight by congressional committees. Of the 29 DFEs responding to our survey, only 7—the Corporation for Public Broadcasting, Election Assistance Commission, Federal Reserve Board, Legal Services Corporation, National Science Foundation, Smithsonian Institution, and United States Postal Service—indicated that they have committees or advisory panels for enhancing governing body effectiveness that are commonly found in public companies or nonprofit organizations. As shown in table 5, 5 of those 7 have audit committees. None of the 29 governing bodies responding to our survey reported having a standing ethics committee. (See table 5.) Some federal entities have applied private sector corporate governance guidelines for oversight committees in response to recent governance challenges or reports on governance and accountability practices. Some of these challenges have even resulted in board reorganization and other governance changes. For instance, in response to an IG report, the Corporation for Public Broadcasting created a governance committee for its board and revised the board’s by-laws to clarify the board’s and president’s roles. In response to a GAO report, the Legal Services Corporation created an audit committee and also added the responsibilities of corporate governance to its Performance Review committee, which was renamed Governance and Performance Review. Based on recommendations of the Smithsonian Institution Board of Regents’ Governance Committee, the board adopted a set of duties and responsibilities for all regents, examined the board structure, and appointed new leadership for each committee. In the last 3 years, the United States Postal Service has added the role of governance to the responsibilities of its Strategic Planning Committee, added the Government Relations and Regulatory Committee, and developed a plan to comply with the Postal Accountability and Enhancement Act. The Board of Governors of the Federal Reserve combines functions of finance, budget, performance review, and operations in its Board Affairs Committee. According to The Conference Board’s Corporate Governance Handbook 2007, a company board’s responsibility typically includes: monitoring and evaluating senior management, reviewing and approving management’s strategic and business plans, reviewing and approving the entity’s risk management program, reviewing and approving financial objectives and plans, monitoring the entity’s performance against the strategic plan, and helping to ensure ethical behavior and compliance with laws and regulations. The Corporate Governance Handbook 2007 further states that a company board’s effectiveness depends on the quality and timeliness of information received in order to make informed decisions and perform its oversight function. Governing bodies establish committees to enhance the overall effectiveness of the board by ensuring focus on and oversight of matters of particular concern. 
Since the IG is responsible for preventing and detecting fraud and abuse, conducting audits and investigations, and recommending policies to promote economy, efficiency, and effectiveness, the work of an IG can benefit the governing body, particularly the governing body's and its committees' efforts to focus on issues and provide oversight of the entity. Because single-member governing bodies and other noncorporate entity governing bodies have many of the same responsibilities as corporate boards of directors, we believe that public company and nonprofit corporation governance practices may provide benefits to those governing bodies. Only five of the DFE governing bodies indicated that they have an audit committee, which is one of the key elements in effective corporate governance. According to the National Council of Nonprofit Associations, an audit committee provides independent oversight of the organization's accounting and financial reporting and oversees the organization's annual audits. In the private sector, an audit committee is generally responsible for the appointment, compensation, and oversight of the external auditor; handling board communication with the external auditor regarding financial reporting matters; and overseeing the entity's financial reporting and the adequacy of internal control over financial reporting. In the federal government environment, the audit committee could also provide a key venue for the IG's role in governance and in communicating with those charged with governance. Unless provided otherwise, the IG is responsible for conducting or overseeing the annual agency audit. New auditing standards reinforce the importance of communication between the financial auditor and those overseeing the organization's governance. The auditing standards require that the auditor communicate with those charged with governance, who have the duty to oversee the strategic direction of the entity and obligations related to the accountability of the entity. The standards recognize that multiple parties may be charged with governance, including oversight bodies, members of legislative committees, boards of directors, audit committees, or parties contracting for the audit. Without an audit committee, organizations may find it more difficult to ensure that weaknesses found during the financial audit, as well as IG recommendations, are addressed properly. None of the DFE governing bodies has a separate standing ethics committee. An ethics committee is responsible for ensuring that the organization has systems in place to provide assurance over employee compliance with the organization's code of conduct and ethics. According to Standards for Internal Control in the Federal Government, a positive control environment includes integrity and ethical values that are provided by leadership through setting and maintaining the organization's ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate. The New York Stock Exchange requires that an ethics committee function be contained within the audit committee of listed companies. Although audit and ethics committees are accepted governance practices, a governing body should consider the entity's structure, size, mission, and risk in determining whether to create these committees.
Nineteen of the 29 DFEs responding to our survey reported having orientation programs for new governing body members, and at least 15 of the 19 programs reportedly provide key information on oversight and governance issues, such as governing body policies and communications with management. Seventeen DFEs reported that the roles and duties of their entities’ IGs are included in the orientation program. Of 9 DFEs that reported having ongoing training for governing body members covering topics such as the fiduciary duty of board members and role of the IG, only 5 addressed some of the statutory requirements and oversight topics— such as Government in the Sunshine Act and Freedom of Information Act, travel policy, and ethics—considered necessary to keep board members updated on current federal government and management practices. DFEs are organizations unique in their missions, entity structure, governing body and oversight framework, and budget. They are also subject to varying governance and accountability statutes. Therefore, orientation and training can be especially important for new governing body members from the private sector who have not worked in the federal government and may not be familiar with the federal government statutes and environment, particularly the role of the IG and how the IG can assist the board in achieving its oversight duties. The initial training and orientation of new governing body members is a critical area for the governing body due to the significance of the stewardship, oversight, and potential fiduciary responsibilities of individual governing body members and the governing body as a whole. Current commonly accepted practice for public companies and nonprofit corporations is to provide board members with a broad-based orientation that encompasses the organization’s mission, vision, and strategic plan; its history; the members’ obligations and performance objectives; board policies on meetings and attendance; and board member job descriptions, including performance expectations and fiduciary obligations. Orientation and training programs help governing bodies to stay current with information on governance practices and the regulatory environment. In addition, a governing body needs to be kept up to date on key management practices and requirements in such areas as risk assessment and mitigation, internal controls, and financial reporting so that the governing body can oversee management’s key processes. As the governing body’s operating environment changes, new issues—whether regulatory, current practice, or industry specific—emerge with the changes. The orientation and training programs could help members of the governing body identify and address the new issues. According to The Conference Board’s corporate governance guidelines, governing bodies should meet regularly and focus principally on broader issues, such as corporate philosophy and mission, broad entitywide policy, strategic management, oversight and monitoring of management, and company performance against business plans. Of those we surveyed, the number of meetings that the 25 DFE governing bodies with more than 1 member held each year varied greatly from 2005 through 2007 (see table 6). It is critical that the number and length of governing body meetings allow the governing body members to appropriately fulfill their stewardship, oversight, and potential fiduciary duties, which include providing active oversight of the entity’s strategy implementation and risk management. 
The IGs were created equally under the IG Act, as amended; however, the entities’ structures, governance practices, and policies and procedures vary, thereby affecting the role of the IG. These variances can be seen in different ways including the IG reporting relationship, budget or spending authority, and the entity’s governing body and management response to IG recommendations. The IG Act, as amended, requires that DFE IGs report to and be under the general supervision of their entity head. Most of the DFE IGs we surveyed report to the highest levels in their entities, a structure that helps to safeguard IG independence in accordance with the IG Act and generally accepted government auditing standards. GAO’s Internal Control Management and Evaluation Tool states that the IG should have sufficient levels of competent and experienced staff and that the responsibilities, scope of work, and audit plans of the IG should be appropriate to the agency’s needs. The IG surveys also showed that most DFE IGs had limited control over their resources and that their budgets and staffing were not always adequate to perform audits or investigations related to the missions or management challenges of their entities. Government Auditing Standards state that restrictions on funds or other resources provided to the audit organization can impair independence and adversely affect the organization’s ability to carry out its responsibilities. Nine of the 31 DFE IGs who responded to our survey stated that they need approval from entity management for spending on specific activities such as travel and contracting and 12 DFE IGs responded that they need entity approval to hire staff. Management responsiveness to IG recommendations is another critical factor that can influence the effectiveness of IG oversight and the effect of IG work. IG responses to the survey showed that management responsiveness to recommendations and audit resolution activities also varied, with some DFE IGs reporting that agency responsiveness to recommendations was lacking. One entity reported having 117 outstanding recommendations, some dating to 1998. Only 10 DFEs reported that their governing bodies have written policies for monitoring the implementation of IG recommendations. Nine of those 10 have policies that require the governing body to respond in writing acknowledging the recommendations and to develop a plan to address them. Audit and oversight committees, which can help oversee implementation of recommendations, could assist IGs in providing effective oversight and actively tracking and resolving recommendations. The IG Act, as amended, requires that DFE IGs report to and be under the general supervision of their entity heads. The IG Act also requires IGs to perform audits in compliance with Government Auditing Standards, which state that for a government internal audit function to be independent, the head of the audit organization must be accountable to the head or deputy head of the government entity or to those charged with governance and be located organizationally outside the staff or line management function of the unit under audit. Without any other safeguards, the independence of an IG who must report audit or investigative findings in areas under the direct responsibility of his or her supervisor may be impaired in both fact and appearance. Twenty-nine of the 31 IGs we surveyed responded that they report either to their entity head or the entity governing body. 
Table 7 shows that 16 IGs responded that they meet with their entity heads at least weekly or monthly and 12 meet with them quarterly. Government Auditing Standards state that the internal audit organization, such as the IGs, should report regularly to those charged with governance. Six of the 31 IGs responded that their entity had an audit or other oversight committee that they meet with and 4 indicated that they met with the committee quarterly (See table 8). Government Auditing Standards state that multiple parties may be charged with governance, including oversight bodies, boards of directors, audit committees, or parties contracting for the audit. Since those charged with governance have the duty to oversee the strategic direction of the entity and obligations related to the accountability of the entity, the IG’s regular communication with the audit or other oversight committee is important for the committee to carry out its governance duties. Government Auditing Standards state that audit organizations must be free from external impairments to independence. External impairments occur when auditors are deterred from acting objectively and exercising professional skepticism by actual or perceived pressures from management and employees of the entity. For example, an IG’s lack of control over the budgetary resources from its entity, such as the entity head restricting funds or other resources to the IG, can impair an IG’s independence and ability to carry out its responsibilities. Separate appropriation accounts for IGs can help provide transparency about the amount of the IG’s budget and reveal trends in resources provided to them. However, until passage of the 2008 Reform Act, there was no statute, including the IG Act, requiring separate appropriations accounts for all DFE IGs. Three DFE IGs have a separate appropriation account or line item in the Budget of the U.S. Government (Legal Services Corporation, National Science Foundation, and Federal Reserve Board). Twenty-six of 31 DFE IGs responding to the survey reported that they developed or oversaw development of their budgets, with 8 of the 26 receiving guidance from entity management which the survey responder indicated limited the size of the original request. Eight DFE IGs reported that they needed approval from entity management to spend funds for purchases, travel, training, and other IG activities (see table 9). Of the entities listed in table 9, the National Endowment for the Arts, National Endowment for the Humanities, National Archives and Records Administration, and the Consumer Product Safety Commission IGs indicated they have never had a problem obtaining additional funds when necessary. IGs at the Federal Labor Relations Authority (FLRA) and U.S. International Trade Commission, however, informed us that they had not been able to obtain funding for staff. A recent peer review of FLRA, for instance, found that the IG did not perform the required FISMA evaluations in 2006 and 2007 because management had not responded to the IG’s requests for funds to hire contract auditors. The peer reviewer recommended that the FLRA IG provide a copy of the peer review report to FLRA management and that the FLRA IG use the peer review report to seek assistance from other oversight bodies—including the appropriate subcommittees of Congress and OMB—for help in addressing the existing impairments to independence. 
The 2008 Reform Act mandates that each IG submit a budget request to the entity head, who must include, among other things, an aggregate request for the IG in the entity's budget proposal to the President. The President must include in the budget submitted to Congress a separate statement of each IG's budget request and the amount requested by the President for each IG. This should provide more transparency to the IG budget process. GAO's Internal Control Management and Evaluation Tool states that in assessing office of inspector general internal controls, the IG should consider whether it has sufficient levels of competent and experienced staff and that the responsibilities, scope of work, and audit plans of the IG should be appropriate to the agency's needs. In fiscal year 2008, the 31 DFE IGs had budgets ranging from $331,000 to $233,300,000, with 5 having budgets of $500,000 or less and 12 having budgets under $1,000,000. In addition to the IGs' overall mandate to prevent and detect waste, fraud, and abuse and to promote economy and efficiency, specific audit work may arise from legal mandates, requests from entity management, requests from Congress, or from discretionary work deemed necessary by the IG. The IGs also reported that the percentage of IG work spent on mandatory audits ranged from 0 to 100 percent. All 19 IGs who responded that their agencies are subject to the Accountability of Tax Dollars Act of 2002 (ATDA) reported that funding for the audits of their entities' financial statements came from their IG budgets. In fiscal year 2008, 15 of 31 DFE IGs reported having 5 or fewer staff. Twelve of the 31 IGs responded that they need entity approval to hire staff. Limited staffing may affect the ability of the IG to conduct the full range of audits required by its mandate (see table 10). Twenty of 31 IGs reported that they had their own full- or part-time General Counsel. The IG offices that did not have their own General Counsel had 5 or fewer staff, except for the Peace Corps, which had 17. Of those that did not have their own General Counsel, all but FLRA used a member of their entity's General Counsel staff. FLRA used the General Counsel of another entity's Office of Inspector General. Absent adequate safeguards, cases where the IG has no access to General Counsel other than that internal to entity management could pose a potential impairment to IG independence. GAO's Internal Control Management and Evaluation Tool states that in assessing an entity's internal controls, the entity should consider whether its IG regularly provides recommendations to management that are evaluated and implemented when appropriate. The tool also considers whether agency management has a mechanism to ensure prompt resolution of findings and recommendations from audits and other reviews. According to their survey responses, the number of recommendations IGs made in 2007 ranged from 0 to 593. A number of the IGs we interviewed stated that agency responsiveness to IG and financial audit recommendations was lacking. One entity had 117 recommendations outstanding, some dating to 1998. Audit or advisory committees, which can play an oversight role in tracking and resolving recommendations, exist at only seven of the DFEs. Ten of the 29 DFEs that responded to our survey reported that their governing bodies have written policies for monitoring the implementation of IG recommendations.
Nine of those 10 have policies that require the governing body to respond in writing acknowledging the recommendations and to develop a plan to address them. Eight of the 10 also require that the governing body provide a time frame for implementing the IG recommendations and that the IG make a determination about whether the recommendations have been implemented. The Report Consolidation Act of 2000, as implemented by OMB Circular No. A-136, Financial Reporting Requirements, requires that IGs of executive agencies summarize the most serious management challenges faced by their entities and assess their entities' progress in addressing these challenges. The challenges and any responses from the head of the agency are to be included in the agency's Performance and Accountability Report (PAR). Twenty-four DFE IGs developed a list of management challenges annually for their entities, while the IGs at Amtrak, the Election Assistance Commission, the Federal Reserve Board, the National Credit Union Administration, the Postal Regulatory Commission, and the Smithsonian Institution reported that they did not. Of those who prepared management challenges, 10 reported them in both their semiannual reports and their Performance and Accountability Reports. Another 10 documented management challenges only in their entity's PAR, and 2 reported them only in the IG semiannual report. Some entities documented their list of challenges in multiple places. The Legal Services Corporation IG did not report management challenges in either the semiannual report or the PAR, neither of which it is required to issue, but included them in the IG's strategic plan. Despite the modernization of governance structures and practices that has occurred in the private sector in recent years, many DFEs, while similar to private corporations and nonprofits, have not updated their governance structures and practices. Therefore, the DFEs lag in commonly accepted governance practices, such as the use of audit committees, ethics committees, and orientation and training of governing body members. For entities using funding from taxpayers and donors, effective governance, accountability, and internal control are key to maintaining trust and credibility. Although the DFE IGs receive equal treatment under the IG Act, as amended, variations in governance structures and practices among the entities create differing environments for them. Governance structures and practices can aid or hamper the work of the IGs, which were created by Congress to provide oversight and enhance the effectiveness of the mission of these entities. Reviewing and updating their governance structures, and the IG's role, can provide DFE governing bodies with the opportunity to determine how best to use the IGs to enhance accountability and improve overall governance. As the 2008 Reform Act is implemented, some of the issues identified in our survey, such as lack of budget transparency and lack of control over budgets, may be mitigated. We are not making specific recommendations in this report, but are providing this information for consideration in future oversight of DFEs and their IGs. The information on governance structures and practices provided in this report can help inform continuing work to improve the effectiveness of government. For example, the new IG Council established under the 2008 Reform Act can use this information in its role of promoting and supporting the effectiveness of the IG community and fostering governmentwide efforts to improve management.
The information provides a basis for beginning discussions on governance structures and practices as well as the IG role, but additional analysis of each individual entity that considers its structure, size, mission, and risk should be completed to determine whether a particular governance or IG practice would provide value. We requested comments on a draft of this report from all 31 DFE entity heads and IGs. Of the entity heads and IGs responding, a number provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to other appropriate congressional committees, the DFE entity heads, and the DFE IGs. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-2600 or franzelj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix V. Our reporting objectives were to describe (1) the statutory structure of the governing body for each designated federal entity (DFE) and (2) the roles of the inspectors general (IG) within the governance structure and management of their respective entities. We conducted this engagement from September 2007 to January 2009 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. To obtain the information needed for our two reporting objectives, we reviewed and summarized information from a variety of sources, including the enabling legislation of each DFE; the IG Act, as amended; the 2007 Performance and Accountability Report (PAR) or 2007 Annual Report of each DFE; the Office of Management and Budget's (OMB) FY 2007 list of designated federal entities and federal entities; and prior GAO reports on inspectors general, accountability, and governance at the DFEs. Based on prior work, we identified relevant current private sector guidance for governance that included the following:
The Conference Board, Corporate Governance Handbook, 2007: Legal Standards and Board Practices.
National Council of Nonprofit Associations, Financial Accountability.
Lipman, F.D. and L.K. Lipman, Corporate Governance Best Practices: Strategies for Public, Private, and Not-For-Profit Organizations.
American Bar Association, Guide to Nonprofit Corporate Governance in the Wake of Sarbanes-Oxley.
Organization for Economic Cooperation and Development (OECD), OECD Principles of Corporate Governance.
We also conducted a survey of the DFE entity heads and DFE IGs and conducted follow-up interviews as needed. In addition, we submitted a data request to the DFE general counsels to ascertain whether their entities were statutorily required to comply with, voluntarily complied with, or did not follow the 12 key governance and accountability statutes that we selected for review. We express no opinion on the applicability of the 12 statutes selected to any of the DFEs.
We summarized survey, data request, and interview results on entity head and management control and supervision over the IG, IG budgets, use of resources, and other operational issues. We identified key factors regarding the effectiveness of the IGs and summarized survey results and other information on their impact on IG effectiveness. Much of the data presented in this report were obtained from the two surveys directed to the DFE entity heads and the DFE IGs. The DFE entity head survey included questions about governing body committees, meetings, orientation, training, financial statement audits, IG oversight, and internal controls. The DFE IG survey included questions about IG experience, staffing, budget, supervision, salary, communications, and resources. Since the population for both surveys was known to be 31, we surveyed all DFE entity heads and DFE IGs. We identified inquiry areas based on the congressional request, previously conducted literature searches on governance responsibilities and structures, and our prior internal experience and reporting on related topics. A listing of numerous relevant publications is printed as appendix VI. We conducted a pretest of our questionnaires for DFE entity heads and for DFE IGs. We directed our DFE entity head survey to the entity head designated by OMB under the IG Act, as amended. We directed our DFE IG survey to the IG for each entity. We e-mailed the entity head questionnaires on March 25 and 26, 2008, and the IG questionnaires on March 25, 2008. Those entity heads not completing the questionnaire were e-mailed replacement questionnaires on April 16, 2008. Those IGs not completing the questionnaire were e-mailed replacement questionnaires on April 11, 2008. On May 21, 2008, we also made follow-up phone calls to nine entity heads and three IGs who had yet to complete the survey. We received 29 of 31 entity head questionnaires and all 31 of the IG questionnaires as of September 16, 2008. We also augmented our work by sending a data request to obtain information from DFE general counsels. The data request included 12 questions about key governance and accountability statutes we selected and whether the entity was statutorily required to comply with, voluntarily complied with, or was neither statutorily required to follow nor voluntarily chose to comply with the statute. To determine the key governance and accountability statutes for our data request, we reviewed relevant prior GAO reports and compared published governance practices to the statutes. We directed our data request to the general counsel of the individual DFEs. We e-mailed the data requests on June 17 and 18, 2008. Those general counsels not completing the data request were e-mailed replacement data requests on August 1, 2008. On August 19, 2008, we also made phone calls to five general counsels. We received all 31 data requests.
Statutory inspectors general were established by Congress after a series of events in the 1970s that included: a 1975 study by a subcommittee of the House of Representatives that found inadequacies in internal audit and investigative procedures in the Department of Health, Education, and Welfare, and a 1977 study by the House Intergovernmental and Human Resources Subcommittee that found serious deficiencies in a number of audit and investigative efforts including a systemic lack of (1) central leadership for audits and investigations, (2) auditor and investigator independence, (3) procedures to ensure Congress would be informed of serious problems, and (4) programs that looked for possible fraud or abuse. The Inspector General Act of 1978 (IG Act) was intended to address these issues by providing for independent IGs appointed by the President. The act charged the IGs with conducting and supervising audits and investigations; recommending policies to promote economy, efficiency, and effectiveness; and preventing and detecting fraud and abuse in their agencies’ programs and operations. IGs are also required to report on the results of their audits and investigations and prepare semiannual reports to agency heads and the Congress. Between 1978 and 1988, Congress passed legislation to establish statutory IGs in 8 additional agencies. The House Subcommittee on Legislation and National Security, Committee on Government Operations asked GAO to study the internal audit capabilities of smaller federal agencies. In May of 1984 GAO issued Status of Internal Audit Capabilities of Federal Agencies Without Statutory Inspectors General. Based on 99 responses to surveys of 105 federal organizations, GAO uncovered many of the issues that led to the establishment of 12 IGs in the IG Act. These included auditors supervised by officials responsible for the programs under review, leading to lack of auditor independence; inadequate audit coverage of vulnerable agency operations; lack of evaluation of significant fraud problems; and audit resolution and follow-up systems that did not meet government requirements. In a June 1986 follow-up report, Nonstatutory Audit and Investigative Groups Need to Be Strengthened, GAO reviewed 41 agencies without statutory IGs and found lack of independent and sufficient audit capabilities within agencies continued to be a problem. In its conclusion to the report, GAO supported legislation that had been recently introduced in Congress that would extend IG Act protections and requirements to most existing executive branch audit units. The Inspector General Act Amendments of 1988 and the Government Printing Office Inspector General Act of 1988 established statutory IGs in 5 additional departments and agencies, the Government Printing Office, and 33 designated federal entities (DFE) listed in the act. Under the 1988 amendments, the IGs established in the 5 departments and agencies were to be appointed by the President with Senate confirmation while the DFE IGs were to be appointed by entity heads. Various other statutes since 1978 have amended the IG Act to add or remove entities required to have IGs. Since the designated federal entities (DFEs) were established with different missions and during different years, the statutory requirements for the identified key governance and accountability statutes vary. 
Following are the key governance and accountability statutes identified that cover funds control and budgeting, performance and financial reporting, accounting and internal control systems, human resources management, and recordkeeping and access to information.
Antideficiency Act (codified as amended in 31 U.S.C. 1341, 1342, 1351, and 1517)—Prohibits officers and employees of the government from obligating or expending funds in advance of or in excess of appropriations.
Purpose Statute (31 U.S.C. § 1301(a))—Requires federal agencies and all U.S. government corporations, both mixed ownership and wholly owned, to use appropriated funds only for the purposes provided in the law.
Improper Payments Information Act of 2002 (Public Law 107-300)—Requires agencies to identify susceptible programs and activities, estimate their improper payments, and report on actions to reduce improper payments.
Accountability of Tax Dollars Act of 2002 (Public Law 107-289)—The Chief Financial Officers Act of 1990 (CFO Act), as amended by the Government Management Reform Act of 1994 (GMRA), requires the 24 agencies of the federal government covered by the CFO Act, including some independent agencies, to submit annual audited financial statements to the Office of Management and Budget (OMB) and Congress. The financial statements must be prepared in accordance with generally accepted accounting principles and audited in accordance with generally accepted government auditing standards. The Accountability of Tax Dollars Act of 2002 (ATDA) expanded this requirement to include most other federal executive agencies.
Government Performance and Results Act of 1993 (Public Law 103-62)—Requires an annual performance report. The annual performance report shall reflect, among other things, the agency's or corporation's progress in achieving the performance goals set out in its annual performance plan, which implements a mandatory longer-term strategic plan.
Federal Managers' Financial Integrity Act of 1982 (FMFIA) (31 U.S.C. 3512 (c), (d))—Provides the statutory basis for management's responsibility for and assessment of internal control. OMB Circular No. A-123, Management's Responsibility for Internal Control (rev. Dec. 21, 2004), sets out the guidance for implementing the statute's provisions, including agencies' assessment of internal control under the standards prescribed by the Comptroller General. Agencies are required to annually provide a statement of assurance on the effectiveness of internal control. U.S. government corporations are not subject to FMFIA, but they are subject to similar requirements under the Government Corporation Control Act, which incorporates by reference the FMFIA standards in requiring U.S. government corporations to include in their annual management reports a statement on internal accounting and administrative control systems.
Federal Information Security Management Act of 2002 (FISMA) (Public Law 107-347)—Requires the development and implementation of an entitywide information security program. As part of that program, FISMA requires entity heads to periodically (1) perform risk assessments of the harm that could result from information security problems, such as the unauthorized disclosure or destruction of information; (2) test and evaluate the effectiveness of elements of the information security program; and (3) provide security awareness training to personnel and contractors.
FISMA also requires the federal entity to annually have its IG or an external auditor perform an independent evaluation of the entity's information security programs and practices to determine their effectiveness and to annually submit a report on the adequacy and effectiveness of information systems to OMB, GAO, and Congress.
Travel, Transportation, and Subsistence (5 U.S.C. Chapter 57) and Federal Travel Regulation—Statutory requirements and executive branch policies for travel by federal civilian employees and others authorized to travel at government expense.
Whistleblower Protection Act (5 U.S.C. 2302)—Provides certain protections to employees of federal agencies and, to a limited extent, U.S. government corporations, when they engage in “whistleblowing,” which involves reporting evidence of illegal or improper federal employer activities to the relevant authorities.
Ethics in Government Act of 1978 (Public Law 95-521)—Governs ethical conduct, including public financial disclosure requirements, and limits outside earned income and activities.
Freedom of Information Act (5 U.S.C. 552)—Requires that federal entities make their records available for public inspection and copying unless one of the listed FOIA exemptions applies, such as for records pertaining to medical files, internal personnel practices, or trade secrets.
Government in the Sunshine Act (5 U.S.C. 552b; Public Law 94-409)—Requires that all board meetings, including meetings of any executive committee of the board, must be open to public observation, unless an exception applies.
This appendix contains profiles of the 31 designated federal entities (DFEs) and their offices of inspectors general (IG). The National Railroad Passenger Corporation was statutorily established to meet the nation's intercity passenger transportation needs. Amtrak's board statutorily consists of seven voting members and one ex officio, nonvoting member (the President of Amtrak). The voting members are appointed by the President and confirmed by the Senate for a 5-year term. The President may choose to appoint the Secretary of Transportation to be a voting member. The Secretary of Transportation does not require the advice and consent of the Senate. As of October 2008, Amtrak had five voting board members, including the Secretary of Transportation, and two vacancies. Audit and Finance; Government Relations, Legal, and Corporate Affairs; Personnel and Compensation; Security, Safety, and Environmental Affairs; and Service Development, Marketing, Product Management and Customer Service. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 166 recommendations for which management action was still needed. The Appalachian Regional Commission is a federal-state partnership that works with the people of Appalachia to create opportunities for self-sustaining economic development and improved quality of life. The commission's purpose is to reduce the substantial socioeconomic gaps between Appalachia and the rest of the nation. The commission, a federal program, attempts to reduce these gaps by awarding grants to various projects such as workforce training, highway construction, small business start-up assistance, and education programs.
The ARC has a 14-member commission composed of a cochairman and the governors of 13 Appalachian states. The federal cochairman is appointed by the President and confirmed by the Senate. The governors select a state cochairman from their number. The commission has an executive director responsible for carrying out the administrative functions of the commission and directing commission staff. Only the federal cochairman and his or her staff are federal employees. None. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were six outstanding recommendations. The Federal Reserve System (Federal Reserve), the central bank of the United States, is charged with conducting the nation’s monetary policy. Through its supervisory and regulatory banking functions, the Federal Reserve maintains the safety and soundness of the nation’s financial system. The Federal Reserve also maintains the stability of the financial system and provides services to depository institutions, the U.S. government, and foreign official institutions. The seven members of the Board of Governors of the Federal Reserve System are appointed by the President and confirmed by the Senate. A full term is 14 years. A member who serves a full term may not be reappointed. The chairman and the vice chairman of the board are designated by the President from among the members and are confirmed by the Senate. They serve a term of 4 years. A member’s term on the board is not affected by his or her status as chairman or vice chairman. The Committee on Board Affairs combines functions of finance, budget, performance review, and operations. Regulations are assigned to the committees on Supervisory and Regulatory Affairs and Consumer and Community Affairs. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 16 outstanding recommendations. The Broadcasting Board of Governors oversees all U.S. government and government-sponsored, nonmilitary, international broadcasting. These functions are carried out by the individual BBG international broadcasters: the Voice of America, Alhurra, Radio Sawa, Radio Farda, Radio Free Europe/Radio Liberty, Radio Free Asia, and Radio and TV Marti, with the assistance of the International Broadcasting Bureau. The board has nine members, eight appointed by the President and confirmed by the Senate, and the Secretary of State. The President appoints one member as chairman subject to the advice and consent of the Senate. No more than four members, excluding the Secretary of State, may be of the same political party. Members serve 3 years, excepting the Secretary of State, and receive compensation for time spent on BBG matters at the Level IV rate of the Executive Schedule. The Secretary of State does not receive any compensation for service to the board. All members are eligible for expense related to travel. Voice of America; International Broadcasting Bureau; Office of Cuba Broadcasting; Radio Free Europe/Radio Liberty; Radio Free Asia; Middle East (MBN); Personnel; and Language Service Review. 
GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The Commodity Futures Trading Commission protects market users and the public from fraud, manipulation, and abusive practices related to the sale of commodity futures and options, and fosters open, competitive, and financially sound futures and option markets. The President appoints, and the Senate confirms, five commissioners with demonstrated knowledge in futures trading or its regulation, or the production, merchandising, processing, or distribution of one or more of the commodities or other goods and articles, services, rights, and interests covered by 7 U.S.C. Chapter 1. No more than three commissioners can be of the same political party, and one commissioner is appointed as the chairman by the President, by and with the advice and consent of the Senate. Commissioners serve 5-year terms and generally serve until their successor is appointed and qualified. The chairman serves at the pleasure of the President. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There was one outstanding recommendation. The Consumer Product Safety Commission protects the public against unreasonable risks of injury from consumer products; assists consumers in evaluating the comparative safety of consumer products; develops uniform safety standards for consumer products and minimizes conflicting state and local regulations; and promotes research and investigation into the causes and prevention of product-related deaths, illnesses, and injuries. There are five commissioners, who are appointed by the President, with the advice and consent of the Senate. Commission members can be removed by the President for neglect of duty or malfeasance in office, but for no other reason. Commissioners are appointed to 7-year terms, with any vacancies filled for the remainder of the term. No more than three members may be of the same political party. The chairman is appointed by the President from among the members of the commission and confirmed by the Senate. The commission elects a vice chairman annually to act in the absence or disability of the chairman or in the case of a vacancy in the office of the chairman. The chairman, subject to commission approval, appoints the various officers for the commission’s operations. At least 30 days before the beginning of each fiscal year, the commission must establish an agenda for commission action. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The Corporation for Public Broadcasting is a steward of the federal government’s investment in public broadcasting. 
It helps support the operations of more than 1,000 locally owned and operated public television and radio stations nationwide and is a source of funding for research, technology, and program development for public radio, television, and related on-line services. The CPB board has nine members, appointed by the President with the advice and consent of the Senate to terms of 6 years. No more than five may be members of the same political party. Annually, the board elects a chairman from its members as well as one or more vice chairmen. The board also selects the president of the corporation and appoints other corporate officers. A member whose term has expired may serve until his or her successor has taken office or until the end of the calendar year, whichever comes first. No member may serve in excess of two consecutive terms. The members of the board are not considered officers or employees of the United States. Members receive $150 per day for meetings and board work, including travel time, and are reimbursed for actual, reasonable, and necessary expenses. No member may receive compensation of more than $10,000 in any fiscal year. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were six outstanding recommendations. The Denali Commission is a federal-state partnership designed to provide critical utilities, infrastructure, and economic support throughout Alaska. There are seven board members, including the federal cochair. Six of the seven positions are statutorily defined as the Governor of the State of Alaska, who serves as the state cochair; the President of the University of Alaska; the President of the Alaska Municipal League; the President of the Alaska Federation of Natives; the Executive President of the Alaska State AFL/CIO; and the President of the Associated General Contractors of Alaska. The Secretary of Commerce appoints the federal cochair from a list of nominations from the President pro tempore of the Senate and the Speaker of the House of Representatives. The federal cochair serves for 4 years and may be reappointed. Except for the federal cochair, members receive a basic rate of pay at Level IV of the Executive Schedule plus travel expenses for time spent on commission work. The commission must meet at least twice a year. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 27 outstanding recommendations as of March 31, 2008. The EAC, established by the Help America Vote Act of 2002, serves as a national clearinghouse and resource for information and review of procedures with respect to the administration of federal elections. The commission has four members appointed by the President with the advice and consent of the Senate. The commission selects the chair and vice chair, who may not be from the same political party, from among its members. The chair and vice chair each serve 1-year terms and may only serve in that position once during each term of office. Members serve for 4 years and may only serve two terms.
Each member is compensated at the annual rate of basic pay for Level IV of the Executive Schedule. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 41 recommendations outstanding. The Equal Employment Opportunity Commission enforces laws that prohibit discrimination based on race, color, religion, sex, national origin, disability, or age in hiring, promoting, firing, setting wages, testing, training, and all other terms and conditions of employment. The commission has five members, no more than three of whom may be from the same political party. They are appointed by the President with the advice and consent of the Senate. The President also designates two of the members to be the chairman and vice chairman. The chairman runs the commission's operations. Members serve for 5 years. None. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 12 recommendations outstanding. The Farm Credit Administration is responsible for ensuring the safe and sound operation of the banks, associations, affiliated service organizations, and other entities that collectively comprise what is known as the Farm Credit System, and for protecting the interests of the public and those who borrow from Farm Credit institutions or invest in Farm Credit securities. The FCA board has three members appointed by the President with the advice and consent of the Senate. One member is designated by the President as the chairman and also serves as the CEO. Members serve for 6 years and may not be reappointed unless they were appointed to fill unexpired terms of 3 years or less. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were three outstanding recommendations. The Federal Communications Commission regulates interstate and foreign commerce in communications by radio, television, wire, satellite, and cable. It is responsible for the provision of rapid, efficient nationwide and worldwide communication services at reasonable rates. Its responsibilities also include the use of communications for promoting safety of life and property and for strengthening the national defense. Five commissioners are appointed by the President, with the advice and consent of the Senate for a term of 5 years. The President designates one commissioner to be chairman. Commissioners receive an annual rate of pay at Level IV of the Executive Schedule, with the chairman receiving Level III. The commission has the authority to appoint the officers and staff of the FCC and determine their compensation. Meetings of the commission must be held no less than once a month. No data. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel.
We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The FEC ensures that the campaign finance process is fully disclosed and that laws regarding campaign finance are enforced. It also enforces the Federal Election Campaign Act (FECA) and oversees the Presidential public funding program. The commission is made up of six members, who are appointed by the President and confirmed by the Senate. Each member serves a single, 6-year term. By law, no more than three commissioners can be members of the same political party, and at least four votes are required for any official commission action. A new chairman is chosen each year from among the members, with no member serving as chairman more than once during his or her term. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 51 outstanding recommendations. The Federal Housing Finance Board ensures the safety and soundness of the Federal Home Loan Banks, their access to the capital markets, and the fulfillment of their housing finance mission. Under the Housing and Economic Recovery Act (HERA) of 2008 (Pub. L. No. 110-289, 122 Stat. 2654 (July 30, 2008)), the FHFB will cease to exist 1 year after the effective date of HERA, or July 30, 2009, to be replaced by the Federal Housing Finance Agency (FHFA). HERA also amended the IG Act to require that the FHFA have an IG appointed by the President and confirmed by the Senate. The board is comprised of four members appointed by the President and confirmed by the Senate, who serve a 7-year term, and the Secretary of HUD. The President designates one of the board members as chairman. No more than three may be of the same political party, and terms are staggered to end every other year. Members filling a vacancy serve only the remainder of the predecessor's term. None. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The Federal Labor Relations Authority oversees the federal service labor-management relations program. It administers the law that protects the right of employees of the federal government to organize, bargain collectively, and participate through labor organizations of their own choosing in decisions affecting them. The authority also ensures compliance with the statutory rights and obligations of federal employees and the labor organizations that represent them in their dealings with federal agencies. The authority is comprised of three board members who are appointed to 5-year terms by the President and confirmed by the Senate. No more than two may be from the same political party. The President designates one member to be chairman, who acts as chief executive and administrative officer of the authority. The chairman is compensated at Level III of the Executive Schedule and the other members are compensated at Level IV. None. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity.
The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 179 outstanding recommendations. The Federal Maritime Commission is responsible for regulating the waterborne foreign commerce of the United States. It ensures that U.S. ocean-borne trades are open to all on fair and equitable terms and protects against concerted activities and unlawful practices. The commission is comprised of five commissioners, who are appointed by the President and confirmed by the Senate to 5-year terms. The President designates one of the commissioners as chairman. No more than three may be members of the same political party. The chairman is the chief executive and administrative officer for the commission. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The Federal Trade Commission enforces the laws that prohibit business practices that are deceptive or unfair to consumers; promotes informed consumer choice and public understanding of the competitive process; and seeks to accomplish its mission without impeding legitimate business activity. The commission is comprised of five commissioners, nominated by the President and confirmed by the Senate, each serving a 7-year term. The President chooses one commissioner to act as chairman. No more than three commissioners can be of the same political party. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were three outstanding recommendations. The Legal Services Corporation’s mission is to promote equal access to justice and to provide high-quality civil legal assistance to low-income persons. The board has 11 members appointed by the President and confirmed by the Senate for 3-year terms. The board elects a chairman annually from among its members and appoints the president of the corporation. The board must meet at least four times per year. Audit; Finance; Governance and Performance Review; Operations and Regulations; Provision for the Delivery of Legal Services. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The National Archives and Records Administration safeguards and preserves the records of our government, ensuring that the people can discover, use, and learn from this documentary heritage; establishes policies and procedures for managing U.S. government records; manages the Presidential Libraries system; and publishes the laws, regulations, and presidential and other public documents. The Archivist is appointed by the President and confirmed by the Senate. There is no set term of office. The Archivist chooses the Deputy Archivist. Not applicable. 
GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The National Credit Union Administration is responsible for chartering, insuring, and supervising federal credit unions and administering the National Credit Union Share Insurance Fund. The administration also administers the Community Development Revolving Loan Fund and manages the Central Liquidity Facility, a mixed-ownership government corporation that supplies emergency loans to member credit unions. The management of NCUA is vested in a full-time, three-member board appointed by the President and confirmed by the Senate. No more than two board members can be from the same political party, and each member serves a staggered 6-year term. The NCUA board normally meets monthly, except in August. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The National Endowment for the Arts, established by Congress in 1965 as an independent federal agency, is the official arts organization of the United States government. It is dedicated to supporting excellence in the arts, both new and established; bringing the arts to all Americans; and providing leadership in arts education. The NEA is headed by a chairperson appointed by the President and confirmed by the Senate. The chairperson serves for 4 years and may be reappointed or serve until a successor is appointed. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The National Endowment for the Humanities is an independent federal agency established by Congress in 1965 to support research, education, preservation, and public programs in the humanities. NEH is directed by a chairperson, who is appointed by the President and confirmed by the U.S. Senate, for a term of 4 years. The chairperson is eligible for reappointment and may continue to serve until a successor has been appointed and qualified. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The National Labor Relations Board is vested with the power to prevent and remedy unfair labor practices committed by private sector employers and unions and to safeguard employees’ rights to organize and determine whether to have unions as their bargaining representative. The board’s five members are appointed by the President and confirmed by the Senate. Board members serve staggered 5-year terms. The President designates one member to serve as chairman of the board.
GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 20 outstanding recommendations. The National Science Foundation promotes the progress of science and engineering through the support of research and education programs. The National Science Board (NSB) is made up of 24 members appointed by the President and confirmed by the Senate, and the NSF director is an ex officio member. Members serve 6-year terms; one-third of the board is appointed every 2 years. NSB members are drawn from industry and universities, and represent a variety of science and engineering disciplines and geographic areas. The NSB meets about six times a year. It reviews and approves major NSF awards and new programs and initiates and conducts studies and reports on a broad range of policy topics. The NSB also publishes occasional policy papers or statements on issues of importance to U.S. science and engineering. Audit and Oversight; Strategy and Budget; Programs and Plans; Education and Human Resources; and Executive. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 76 outstanding recommendations. The mission of the Peace Corps is to help the people of interested countries in meeting their need for trained men and women, and to help promote better mutual understanding between Americans and citizens of other countries. The director and deputy director are appointed by the President and confirmed by the Senate. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 113 outstanding recommendations. The Pension Benefit Guaranty Corporation provides for timely and uninterrupted pension benefit payments to participants and beneficiaries of voluntary private pension plans. PBGC is administered by a director who reports to a board of directors, which consists of the Secretaries of Labor, Commerce, and Treasury. The Secretary of Labor is chairman of the board and calls meetings. Members serve without compensation, but are reimbursed for expenses incurred during board business. The corporation is aided by a seven-member Advisory Committee appointed by the President. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 113 outstanding recommendations. The Postal Regulatory Commission oversees the Market Dominant and Competitive Products of the U.S. Postal Service, adjusts as necessary lists of these products, and reviews related complaints. The commission is composed of five commissioners, each of whom is appointed by the President, with the advice and consent of the Senate, for a term of 6 years.
The chairman is designated by the President. A commissioner may continue to serve after the expiration of his or her term for up to 1 year. No more than three members of the commission may be members of the same political party. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were no outstanding recommendations. The Securities and Exchange Commission administers federal securities laws that seek to provide protection for investors; to ensure that securities markets are fair; and, when necessary, to provide the means to enforce securities laws through sanctions. The SEC consists of five commissioners appointed by the President and confirmed by the Senate, with staggered 5-year terms. One of them is designated by the President as chairman of the commission—the agency’s chief executive. No more than three of the commissioners may belong to the same political party. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 59 outstanding recommendations. The Smithsonian Institution is an independent trust instrumentality of the United States which comprises an extensive museum and research complex. It is dedicated to the increase and diffusion of knowledge. The Board of Regents has 17 members, including the Vice President, the Chief Justice of the United States, 3 members of the U.S. Senate, and 3 members of the House of Representatives. Nine other persons, none of whom may be members of Congress, 2 of whom must be residents of Washington, D.C., and 7 of whom must be from U.S. states, make up the remainder. House members serve for 2 years, the Senate members serve their term as Senators, and the other 9 members serve for 6 years. The board elects its own chancellor, who is the presiding officer of the Board of Regents. The board also elects the Secretary of the institution and three board members as an executive committee. At least 8 members must be present for a meeting to have a quorum. Members are paid travel expenses to attend meetings, but their service is otherwise gratuitous. Audit and Review; Executive; Compensation and Human Resources; Facilities; Finance; Investment; Governance and Nominating; Advancement; and Strategic Planning and Programs. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 61 outstanding recommendations. The United States International Trade Commission administers U.S. trade remedy laws within its mandate; provides the President, the United States Trade Representative, and Congress with analysis, information, and support on matters of tariffs and international trade and competitiveness; and maintains the Harmonized Tariff Schedule of the United States. The USITC is headed by six commissioners who are nominated by the President and confirmed by the U.S. Senate.
No more than three commissioners may be members of the same political party. The commissioners serve overlapping terms of 9 years each, with a new term beginning every 18 months. The chairman and vice chairman are designated by the President from among the current commissioners for 2-year terms. The chairman and vice chairman must be from different political parties, and the chairman cannot be from the same political party as the preceding chairman. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There was one outstanding recommendation. The mission of the USPS is to provide the nation with reliable, affordable, universal mail service. The Board of Governors of USPS is composed of 11 members. It includes nine governors, who are appointed by the President and confirmed by the Senate, as well as the Postmaster General and the Deputy Postmaster General. The nine governors select the Postmaster General, who becomes a member of the board, and those 10 select the Deputy Postmaster General, who also serves on the board. The Postmaster General serves at the pleasure of the governors for an indefinite term. The Deputy Postmaster General serves at the pleasure of the governors and the Postmaster General. The nine governors serve 7-year terms. Each governor receives $300 per day for not more than 42 days of meetings each year and travel expenses, in addition to an annual salary of $30,000. Audit and Finance; Compensation and Management Resources; Government Relations and Regulatory; Governance and Strategic Planning; and Ad Hoc Committee on Operations. GAO sent a data request to the general counsel of each DFE asking about the applicability of 12 governance and accountability statutes to their entity. The table below reflects the response of the general counsel. We did not independently analyze the applicability of these statutes to each entity. There were 212 outstanding recommendations. Jeanette Franzel (202) 512-2600 or franzelj@gao.gov. In addition to the person named above, major contributors to this report were Kimberly McGatlin (Assistant Director), Lisa Crye, Francis Dymond, Joel Grossman, Jacquelyn Hamilton, Maxine Hattery, Jennifer Henderson, Jack Hufnagle, Chelsea Lounsbury, and Tory Wudtke. Inspectors General: Independent Oversight of Financial Regulatory Agencies. GAO-09-524T. Washington, D.C.: March 25, 2009. Inspectors General: Actions Needed to Improve Audit Coverage of NASA. GAO-09-88. Washington, D.C.: December 18, 2008. Legal Services Corporation: Improvements Needed in Governance, Accountability, and Grants Management and Oversight. GAO-08-833T. Washington, D.C.: May 22, 2008. Smithsonian Institution: Board of Regents Has Implemented Many Governance Reforms, but Ensuring Accountability and Oversight Will Require Ongoing Action. GAO-08-632. Washington, D.C.: May 15, 2008. Federal Oversight: The Need for Good Governance, Transparency, and Accountability. GAO-07-788CG. Washington, D.C.: April 16, 2007. Smithsonian Institution: Status of Efforts to Address a Range of Funding and Governance Challenges. GAO-08-250T. Washington, D.C.: December 12, 2007. Inspectors General: Limitations of IG Oversight at the Department of State. GAO-08-135T. Washington, D.C.: October 31, 2007.
Legal Services Corporation: Governance and Accountability Practices Need to Be Modernized and Strengthened. GAO-07-993. Washington, D.C.: August 15, 2007. Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007. Inspectors General: Proposals to Strengthen Independence and Accountability. GAO-07-1021T. Washington, D.C.: June 20, 2007. Inspectors General: Activities of the Department of State Office of Inspector General. GAO-07-138. Washington, D.C.: March 23, 2007. Corporate Governance: NCUA’s Controls and Related Procedures for Board Independence and Objectivity Are Similar to Other Financial Regulators, but Opportunities Exist to Enhance Its Governance Structure. GAO-07-72R. Washington, D.C.: November 30, 2006. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006. Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006. Highlights of the Comptroller General’s Panel on Federal Oversight and the Inspectors General. GAO-06-931SP. Washington, D.C.: September 11, 2006. United Nations: Funding Arrangements Impede Independence of Internal Auditors. GAO-06-575. Washington, D.C.: April 25, 2006. Activities of the Treasury Inspector General for Tax Administration. GAO-05-999R. Washington, D.C.: September 27, 2005. Amtrak: Management and Accountability Issues Contribute to Unprofitability of Food and Beverage Service. GAO-05-761T. Washington, D.C.: June 9, 2005. Kennedy Center: Stronger Oversight of Fire Safety Issues, Construction Projects, and Financial Management Needed. GAO-05-334. Washington, D.C.: April 22, 2005. Tax-Exempt Sector: Governance, Transparency, and Oversight Are Critical for Maintaining Public Trust. GAO-05-561T. Washington, D.C.: April 20, 2005. Activities of the Amtrak Inspector General. GAO-05-306R. Washington, D.C.: March 4, 2005. Inspectors General: Enhancing Federal Accountability. GAO-04-117T. Washington, D.C.: October 8, 2003. Department of Health and Human Services: Review of the Management of Inspector General Operations. GAO-03-685. Washington, D.C.: June 10, 2003. Inspectors General: Office Consolidation and Related Issues. GAO-02-575. Washington, D.C.: August 15, 2002. Inspectors General: Comparison of Ways Law Enforcement Authority Is Granted. GAO-02-437. Washington, D.C.: May 22, 2002. Inspectors General: Department of Defense IG Peer Reviews. GAO-02-253R. Washington, D.C.: December 20, 2001. U.S. Export-Import Bank: Views on Inspector General Oversight. GAO-01-1038R. Washington, D.C.: September 6, 2001. HUD Inspector General: Actions Needed to Strengthen Management and Oversight of Operation Safe Home. GAO-01-794. Washington, D.C.: June 29, 2001.
For entities that rely on others for funding, effective governance, accountability, and internal control are key to maintaining trust and credibility. In recent years, corporate governance and accountability have received increased scrutiny and emphasis in the nonprofit, federal government, and public company sectors. Governance and accountability problems have also been identified at designated federal entities (DFE), such as the Smithsonian Institution, the Legal Services Corporation, and the Pension Benefit Guaranty Corporation. This report responds to a congressional request that GAO describe (1) the statutory structure of the governing bodies for each DFE organization and (2) the role of the inspectors general (IGs) in the governance structure. To accomplish this, GAO surveyed the DFE heads and IGs on governance issues and reviewed information from a variety of sources, including the IG Act and subsequent amendments; enabling legislation for the DFEs; and legislative and regulatory standards and requirements for financial reporting and internal control. GAO is not making specific recommendations in this report, but is providing this information for consideration in future efforts to update the governance of DFEs, oversee the entities and their IGs, and continue work to improve the effectiveness of government. GAO received technical comments, which were incorporated as appropriate. The DFEs vary in structure and requirements for governance. At the time of GAO's review, the designated size of the governing bodies of the 31 DFEs ranged from 1 to 24 members. Fifteen had at least one vacancy, and two had more vacancies than sitting members. The frequency of DFE multimember governing-body meetings ranged from daily to rarely or not at all. In GAO's survey of DFEs, 13 indicated that they are required to comply with 12 key statutes that cover management and accountability. The remaining 18 reported varying requirements, with one not subject to any of the statutes. Only 7 DFE governing bodies have a structure that includes governance or oversight committees for ensuring oversight of management decisions, results of operations, and emerging risks. While 19 DFEs reported having orientation programs for new governing body members, only 9 reported ongoing training. IG effectiveness is influenced by an entity's governance structure and practices. Within DFEs, IGs vary in their role and relationship with management. IGs are charged with preventing and detecting fraud and abuse; conducting audits and investigations; and recommending policies to promote economy, efficiency, and effectiveness. To accomplish these objectives, IGs must be able to establish and maintain independence; have control of their resources to plan and perform work; recruit, retain, and manage sufficient professional staff; and be able to resolve audit and investigation recommendations. GAO's survey of IGs showed that most report to the highest levels in their entities, a legislative requirement that is a key element of independence. At the same time, the IGs had limited control over their resources, and their budgets and staffing were not always adequate to perform needed audits and investigations. Only 3 IGs had the transparency that a separate line item in their entity's budget provides, and 8 needed management approval for spending. Audit resolution varied, with some IGs reporting a lack of entity responsiveness to recommendations.
Only 10 DFEs reported that their governing bodies have written policies for monitoring the implementation of IG recommendations. Nine of the 10 have policies that require the governing body to respond in writing and to develop a plan to address recommendations. During the course of GAO's work, Congress passed the Inspector General Reform Act of 2008, which the President signed into law on October 14, 2008, and which was intended to enhance IG independence. Its implementation may mitigate some of the issues GAO found.
DOD and VA offer health care benefits to active duty servicemembers and veterans, among others. Under DOD’s health care system, eligible beneficiaries may receive care from military treatment facilities or from civilian providers. Military treatment facilities are individually managed by each of the military services—the Army, the Navy, and the Air Force. Under VA, eligible beneficiaries may obtain care through VA’s integrated health care system of hospitals, ambulatory clinics, nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. VA has organized its health care facilities into a polytrauma system of care that helps address the medical needs of returning servicemembers and veterans, in particular those who have an injury to more than one part of the body or organ system that results in functional disability and physical, cognitive, psychosocial, or psychological impairment. Persons with polytraumatic injuries may have injuries or conditions such as traumatic brain injury (TBI), amputations, fractures, and burns. Over the past 6 years, DOD has designated over 30,000 servicemembers involved in Operations Iraqi Freedom and Enduring Freedom as wounded in action. Servicemembers injured in these conflicts are surviving injuries that would have been fatal in past conflicts, due, in part, to advanced protective equipment and medical treatment. The severity of their injuries can result in a lengthy transition from patient back to duty, or to veteran status. Initially, most seriously injured servicemembers from these conflicts, including activated National Guard and Reserve members, are evacuated to Landstuhl Regional Medical Center in Germany for treatment. From there, they are usually transported to military treatment facilities in the United States, with most of the seriously injured admitted to Walter Reed Army Medical Center or the National Naval Medical Center. According to DOD officials, once they are stabilized and discharged from the hospital, servicemembers may relocate closer to their homes or military bases and are treated as outpatients by the closest military or VA facility. Under its Medical Action Plan, the Army has developed a new organizational structure—Warrior Transition Units—for providing an integrated continuum of care for servicemembers who generally require at least 6 months of treatment, among other factors. Within each unit, the servicemember is assigned to a team of three key staff, and this team is responsible for overseeing the continuum of care for the servicemember. The Army refers to this team as a “Triad,” which consists of a (1) primary care manager—usually a physician who provides primary oversight and continuity of health care and ensures the quality of the servicemember’s care; (2) nurse case manager—usually a registered nurse who plans, implements, coordinates, monitors, and evaluates options and services to meet the servicemember’s needs; and (3) squad leader—a noncommissioned officer who links the servicemember to the chain of command, builds a relationship with the servicemember, and works alongside the other parts of the Triad to ensure that the needs of the servicemember and his or her family are met. The Army established 32 Warrior Transition Units to provide a unit in every medical treatment facility that has 35 or more eligible servicemembers.
The Army’s goal is to fill the Triad positions according to the following ratios: 1:200 for primary care managers; 1:18 for nurse case managers at Army medical centers that normally see servicemembers with more acute conditions, and 1:36 for other types of Army medical treatment facilities; and 1:12 for squad leaders. Returning injured servicemembers must potentially navigate two different disability evaluation systems that generally rely on the same criteria but for different purposes. DOD’s system serves a personnel management purpose by identifying servicemembers who are no longer medically fit for duty. The military’s process starts with identification of a medical condition that could render the servicemember unfit for duty, a process that could take months to complete. The servicemember is evaluated by a medical evaluation board (MEB) to identify any medical conditions that may render the servicemember unfit. The member is then evaluated by a physical evaluation board (PEB) to make a determination of fitness or unfitness for duty. If the servicemember is found unfit and the unfit conditions were incurred in the line of duty, the PEB assigns the servicemember a combined percentage rating for those conditions, using VA’s rating system as a guideline, and the servicemember is discharged from duty. This disability rating, along with years of service and other factors, determines subsequent disability and health care benefits from DOD. For servicemembers meeting the minimum rating and years-of-service thresholds, monthly disability retirement payments are provided; for those not meeting these thresholds, a lump-sum severance payment is provided. As servicemembers in the Army navigate DOD’s disability evaluation system, they interface with staff who play a key role in supporting them through the process. MEB physicians play a fundamental role as they are responsible for documenting the medical conditions of servicemembers for the disability evaluation case file. In addition, MEB physicians may require that servicemembers obtain additional medical evidence from specialty physicians, such as psychiatrists. Throughout the MEB and PEB process, a physical evaluation board liaison officer serves a key role by explaining the process to servicemembers and ensuring that the servicemembers’ case files are complete before they are forwarded for adjudication. The board liaison officer informs servicemembers of board results and of deadlines at key decision points in the process. The military also provides legal counsel to servicemembers in the disability evaluation process. The Army, for example, provides them with legal representation at formal board hearings. The Army will provide military counsel, or servicemembers may retain their own representative at their own expense. In addition to receiving benefits from DOD, veterans may receive compensation from VA for lost earning capacity due to service-connected disabilities. Although a servicemember may file a VA claim while still in the military, he or she can obtain disability compensation from VA only as a veteran. VA will evaluate all claimed conditions, whether they were evaluated previously by the military service’s evaluation process or not. If VA finds that a veteran has one or more service-connected disabilities with a combined rating of at least 10 percent, VA will pay monthly compensation. The veteran can claim additional benefits over time, for example, if a service-connected disability worsens.
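To make the Triad staffing ratios quoted at the beginning of this section concrete, the short Python sketch below computes the staff one Warrior Transition Unit would need to meet those goals. This is an illustrative calculation only: the ratios are the Army's stated targets, while the 600-soldier unit population, the facility-type labels, and the function name are hypothetical.

import math

# Army ratio goals quoted above: 1 primary care manager per 200 servicemembers,
# 1 squad leader per 12, and 1 nurse case manager per 18 at Army medical centers
# (per 36 at other treatment facilities). The example population is hypothetical.
PRIMARY_CARE_RATIO = 200
SQUAD_LEADER_RATIO = 12
NURSE_CASE_MANAGER_RATIO = {"medical_center": 18, "other_facility": 36}

def required_triad_staff(population, facility_type="medical_center"):
    """Staff needed for one unit to meet the ratio goals (rounded up)."""
    return {
        "primary_care_managers": math.ceil(population / PRIMARY_CARE_RATIO),
        "nurse_case_managers": math.ceil(population / NURSE_CASE_MANAGER_RATIO[facility_type]),
        "squad_leaders": math.ceil(population / SQUAD_LEADER_RATIO),
    }

# Example: a hypothetical 600-soldier unit at an Army medical center
print(required_triad_staff(600))
# {'primary_care_managers': 3, 'nurse_case_managers': 34, 'squad_leaders': 50}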
To improve the timeliness and resource utilization of DOD’s and VA’s separate disability evaluation systems, the agencies embarked on a planning effort for a joint disability evaluation system that would enable servicemembers to receive VA disability benefits shortly after leaving the military without going through both DOD’s and VA’s processes. A key part of this planning effort included a “table top” exercise whereby the planners simulated the outcomes of cases using four potential options that incorporated variations of the following three elements: (1) a single, comprehensive medical examination to be used by both DOD and VA in their disability evaluations; (2) a single disability rating performed by VA; and (3) incorporating a DOD-level evaluation board for adjudicating servicemembers’ fitness for duty. Based on the results of this exercise, DOD and VA implemented the selected pilot design using live cases at three Washington, D.C.-area military treatment facilities, including Walter Reed Army Medical Center, in November 2007. Key features of the pilot include (see fig. 1): a single physical examination conducted to VA standards; disability ratings prepared by VA, for use by both DOD and VA in determining disability benefits; and additional outreach and non-clinical case management provided by VA staff at the DOD pilot locations to explain VA results and processes to servicemembers. The Army has made strides in increasing key staff positions in support of servicemembers undergoing medical treatment as well as disability evaluation, but faces a number of challenges in achieving or maintaining stated goals. Although the Army has made significant progress in staffing its Warrior Transition Units, several challenges remain, including hiring medical staff in a competitive market, replacing temporarily borrowed personnel with permanent staff, and getting eligible servicemembers into the units. With respect to supporting servicemembers as they navigate the disability evaluation process, the Army has reduced caseloads of key support staff, but has not yet reached its goals and faces challenges with both hiring and meeting current demands of servicemembers in the process. Since September 2007, the Army has made considerable progress in staffing its Warrior Transition Units, increasing the number of staff assigned to Triad positions by almost 75 percent. As of February 6, 2008, the Army had about 2,300 personnel staffing its Warrior Transition Units. In February 2008, the Army reported that its Warrior Transition Units had achieved “full operational capability,” which was the goal established in the Army’s Medical Action Plan. The Warrior Transition Units reported that they had met this goal even though some units had staffing shortages or faced other challenges. The Army’s January 2008 assessment defined full operational capability across a wide variety of areas identified in the Army’s Medical Action Plan, not just personnel fill. For example, the assessment included whether facilities and barracks were suitable and whether a Soldier and Family Assistance Center was in place and providing essential services. In addition, the commander assessed whether the unit could conduct the mission-essential tasks assigned to it. As a result, such ratings have both objective and subjective elements, and the Army allows commanders to change the ratings based on their judgment.
[Table: Location (size of Warrior Transition Unit population): Fort Hood, Texas (957); Walter Reed Army Medical Center, Washington, D.C. (674); Fort Lewis, Washington (613); Fort Campbell, Kentucky (596); Fort Drum, New York (395); Fort Polk, Louisiana (248); Fort Knox, Kentucky (243); Fort Irwin & Balboa, California (89); Fort Belvoir, Virginia (43); Fort Huachuca, Arizona (41); Redstone Arsenal, Alabama (17).] The Army is confronting other challenges, as well, including replacing borrowed staff in Triad positions with permanently assigned staff without disrupting the continuity of care for servicemembers. We previously reported in September 2007 that many units were relying on borrowed staff to fill positions—about 20 percent overall. This practice has continued; in February 2008, about 20 percent of Warrior Transition Unit staff continued to be borrowed from other positions. Army officials told us that using borrowed staff was necessary to get the Warrior Transition Units implemented quickly and has been essential in staffing units that have experienced sudden increases in servicemembers needing care. Officials also told us that using borrowed staff is a temporary solution for staffing the units and that these staff will be transitioned out of the positions when permanent staff are available. Replacing the temporary staff will result in turnover among Warrior Transition Unit staff, which can disrupt the continuity of care provided to servicemembers. Another lingering challenge facing the Army is getting eligible servicemembers into the Warrior Transition Units. In developing its approach, the Army envisioned that servicemembers meeting specific criteria, such as requiring more than 6 months of treatment or having a condition that requires going through the Medical Evaluation Board process, would be assigned to the Warrior Transition Units. Since September 2007, the Warrior Transition Unit population has increased by about 80 percent—from about 4,350 to about 7,900 servicemembers. However, although the percentage of eligible servicemembers going through the Medical Evaluation Board process who were not in a Warrior Transition Unit has been cut almost in half since September 2007, more than 2,500 eligible servicemembers were not in units as of February 6, 2008. About 1,700 of these servicemembers (about 70 percent) are concentrated in ten locations. (See table 2.) Warrior Transition Unit commanders conduct risk assessments of eligible servicemembers to determine if their care can be appropriately managed outside of the Warrior Transition Unit. These assessments are to be conducted within 30 days of determining that the servicemember meets eligibility criteria. For example, a servicemember’s knee injury may require a Medical Evaluation Board review—a criterion for being placed in a Warrior Transition Unit—but the person’s unit commander can determine that the person can perform a desk job while undergoing the medical evaluation process. According to Army guidance, servicemembers eligible for a Warrior Transition Unit will generally be moved into the units; it is to be the exception, not the rule, for a servicemember not to be transferred to a Warrior Transition Unit. Army officials told us that the population of 2,500 servicemembers who had not been moved into a Warrior Transition Unit consisted of both servicemembers who had just recently been identified as eligible for a unit but had not yet been evaluated and servicemembers whose risk assessment determined that their care could be managed outside of a unit.
Officials told us that servicemembers who needed their care managed more intensively through Warrior Transition Units had been identified through the risk assessment process and had been moved into such units. Bringing eligible personnel into the Warrior Transition Units, however, could exacerbate staffing shortfalls in some units. Army officials told us that, to minimize future staffing shortfalls, they are identifying areas where they anticipate increases in the number of servicemembers needing care in a Warrior Transition Unit and will use this information to determine the units’ future staffing needs. Another emerging challenge is gathering reliable and objective data to measure progress. A central goal of the Army’s efforts is to make the system more servicemember- and family-focused, and the Army has initiated efforts to determine how well the units are meeting servicemembers’ needs. To its credit, the Army has developed a wide range of methods to monitor its units, among them a program to place independent ombudsmen throughout the system as well as town hall meetings and a telephone hotline for servicemembers to convey concerns about the Warrior Transition Units. Additionally, through its Warrior Transition Program Satisfaction Survey, the Army has been gathering and analyzing information on servicemembers’ opinions about their nurse case manager and the overall Warrior Transition Unit. However, initial response rates have been low, which has limited the Army’s ability to reliably assess satisfaction. In February 2008, the Army started following up with nonrespondents, and officials told us that these efforts have begun to improve response rates. To obtain feedback from a larger percentage of servicemembers in the Warrior Transition Units, the Army administered another satisfaction survey in January 2008. This survey, which also solicited servicemembers’ opinions about components of the Triad and overall satisfaction with the Warrior Transition Units, garnered a more than 90 percent response rate from the population surveyed. While responses to the survey were largely positive, the survey is limited in its ability to accurately gauge the Army’s progress in improving servicemember satisfaction with the Warrior Transition Unit because it was not intended to be a methodologically rigorous evaluation. For example, the units were not given specific instructions on how to administer the survey, and as a result, it is not clear to what extent servicemembers were afforded anonymity in responding to the survey. Units were instructed to reach as many servicemembers as possible within a 24-hour period in order to provide the Army with immediate feedback on servicemembers’ overall impressions of the care they were receiving. Injured and ill servicemembers who must undergo a fitness-for-duty assessment and disability evaluation rely on the expertise and support of several key staff—board liaisons, legal personnel, and board physicians—to help them navigate the process. Board liaisons explain the disability process to servicemembers and are responsible for ensuring that their disability case files are complete.
Legal staff and medical evaluation board physicians can substantially influence the outcome of servicemembers’ disability evaluations because legal personnel provide important counsel to servicemembers during the disability evaluation process, and evaluation board physicians evaluate and document servicemembers’ medical conditions for the disability evaluation case file. With respect to board liaisons, the Army has expanded hiring efforts and met its goals for reducing caseloads at most treatment facilities, but not at some of the facilities with the most servicemembers in the process. In August 2007, the Army established an average caseload target of 30 servicemembers per board liaison. As of February 2008, the Army had expanded the number of board liaisons by about 22 percent. According to the Army, average caseloads per liaison have declined from 54 servicemembers at the end of June 2007 to 46 at the end of December 2007. However, 11 of 35 treatment facilities continue to have shortages of board liaisons, and about half of all servicemembers in the disability evaluation process are located at these 11 treatment facilities. (See fig. 2.) Because of their caseloads, liaisons we spoke with at one location had difficulty making appointments with servicemembers, which has challenged their ability to provide timely and comprehensive support. The Army plans to hire additional board liaisons, but faces challenges in keeping up with increased demand. According to an Army official responsible for staff planning, the Army reviews the number of liaisons at each treatment facility weekly and reviews Army policy for the target number of servicemembers per liaison every 90 days. The official also identified several challenges in keeping up with increased demand for board liaisons, including the increase in the number of injured and ill servicemembers in the medical evaluation board process overall, and the difficulty of attracting and retaining liaisons at some locations. According to Army data, the total number of servicemembers completing the medical evaluation board process increased about 19 percent from the end of 2006 to the end of 2007. In addition to gaps in board liaisons, according to Army documents, staffing of dedicated legal personnel who provide counsel to injured and ill servicemembers throughout the disability evaluation processes is currently insufficient. Ideally, according to the Army, servicemembers should receive legal assistance during both the medical and physical evaluation board processes. While servicemembers may seek legal assistance at any time, the Office of the Judge Advocate General’s policy is to assign dedicated legal staff to servicemembers when their cases go before a formal physical evaluation board. In June 2007, the Army assigned 18 additional legal staff—12 Reserve attorneys and 6 Reserve paralegals—to help meet increasing demands for legal support throughout the process. As of January 2008, the Army had 27 legal personnel—20 attorneys and 7 paralegals—located at 5 of 35 Army treatment facilities who were dedicated to supporting servicemembers primarily with the physical evaluation board process. However, the Office of the Judge Advocate General has acknowledged that these current levels are insufficient for providing support during the medical evaluation board process and has proposed hiring an additional 57 attorneys and paralegals to provide legal support to servicemembers during the medical evaluation board process.
The proposed 57 attorneys and paralegals include 19 active-duty military attorneys, 19 civilian attorneys, and 19 civilian paralegals. On February 21, 2008, Army officials told us that 30 civilian positions had been approved, consisting of 15 attorneys and 15 paralegals. While the Army has plans to address gaps in legal support for servicemembers, challenges with hiring and staff turnover could limit these efforts. According to Army officials, even if the plan to hire additional personnel is approved soon, hiring of civilian attorneys and paralegals may be slow due to the time it takes to hire qualified individuals under government policies. Additionally, the 19 active-duty military attorneys who would be staffed under the plan would likely serve in their positions for only 12 to 18 months. According to a Disabled American Veterans representative with extensive experience counseling servicemembers during the evaluation process, frequent rotations and turnover of Army attorneys working on disability cases limit their effectiveness in representing servicemembers due to the complexity of disability evaluation regulations. With respect to medical evaluation board physicians, who are responsible for documenting servicemembers’ medical conditions, the Army has mostly met its goal for the average number of servicemembers per physician at each treatment facility. In August 2007, the Army established a goal of one medical evaluation board physician for every 200 servicemembers. As with the staffing ratio for board liaisons, the ratio for physicians is reviewed every 90 days by the Army, and the ratio at each treatment facility is reviewed weekly, according to an Army official. As of February 2008, the Army had met the goal of 200 servicemembers per physician at 29 of 35 treatment facilities and almost met the goal at two others. Despite having mostly met its goal for medical evaluation board physicians, according to Army officials, the Army continues to face challenges in this area. For example, according to an Army official, physicians are having difficulty managing their caseloads even at locations where they have met or are close to the Army’s goal of 1 physician for every 200 servicemembers, due not only to the volume of cases but also to their complexity. According to Army officials, disability cases often involve multiple conditions and may include complex conditions such as TBI and post-traumatic stress disorder (PTSD). Some Army physicians told us that the ratio of servicemembers per physician allows little buffer when there is a surge in caseloads at a treatment facility. For this reason, some physicians told us that the Army could provide better service to servicemembers if the number of servicemembers per physician were reduced from 200 to 100 or 150. In addition to increasing the number of staff who support this process, the Army has reported other progress and efforts underway that could further ease the disability evaluation process. For example, the Army has reported improving outreach to servicemembers by establishing and conducting standardized briefings about the process. The Army has also improved guidance to servicemembers by developing and issuing a handbook on the disability evaluation process and creating a web site for each servicemember to track his or her progress through the medical evaluation board. Finally, the Army told us that efforts are underway to further streamline the process for servicemembers and improve supporting information technology.
For example, the Army established a goal to eliminate 50 percent of the forms required by the current process. While we are still assessing the scope, status, and potential impact of these efforts, a few questions have been raised about some of them. For example, according to Army officials, servicemembers’ usage of the medical evaluation board web site has been low. In addition, some servicemembers with whom we spoke believed the information presented on the web site was not helpful in meeting their needs. One measure of how well the disability evaluation system is working does not indicate that improvements have occurred. The Army collects data and regularly reports on the timeliness of the medical evaluation board process. While we have previously reported that the Army has few internal controls to ensure that these data are complete and accurate, the Army recently told us that it is taking steps to improve the reliability of these data. We have not yet substantiated these assertions. Assuming current data are reliable, the Army has reported not meeting a key target for medical evaluation board timeliness and has even reported a negative trend in the last year. Specifically, the Army’s target is for 80 percent of the medical evaluation board cases to be completed in 90 days or less, but the percentage that met the standard declined from 70 percent in October through December 2006 to 63 percent in October through December 2007. Another potential indicator of how well the disability evaluation process is working is under development. Since June 2007, the Army has used the Warrior Transition Program Satisfaction Survey to ask servicemembers about their experience with the disability evaluation process and board liaisons. However, according to Army officials in charge of the survey, response rates to survey questions related to the disability process were particularly low because most surveyed servicemembers had not yet begun the disability evaluation process. The Army is in the process of developing satisfaction surveys that are separate from the Warrior Transition Unit survey to gauge servicemembers’ perceptions of the medical and physical evaluation board processes. DOD and VA have joined together to quickly pilot a streamlined disability evaluation process, but evaluation plans currently lack key elements. In August 2007, DOD and VA conducted an intensive 5-day “table top” exercise to evaluate the relative merits of four potential pilot alternatives. Though the exercise yielded data quickly, there were trade-offs in the nature and extent of data that could be obtained in that time frame. In November 2007, DOD and VA jointly initiated a 1-year pilot in the Washington, D.C. area using live cases, although DOD and VA officials told us they may consider expanding the pilot to other locations beyond the current sites around July 2008. However, pilot results may be limited at that and other critical junctures, and pilot evaluation plans currently lack key elements, such as criteria for expanding the pilot. Prior to implementing the pilot in November 2007, the agencies conducted a 5-day “table top” exercise that involved a simulation of cases intended to test the relative merits of four pilot options. All the alternatives included a single VA rating to be used by both agencies.
However, the exercise was designed to evaluate the relative merits of certain other key features, such as whether DOD or VA should conduct a single physical examination and whether there should be a DOD-wide disability evaluation board and, if so, what its role would be. Ultimately, the exercise included four pilot alternatives involving different combinations of these features. Table 3 summarizes the pilot alternatives. The simulation exercise was formal in that it followed a pre-determined methodology and comprehensive in that it involved a number of stakeholders and captured a broad range of metrics. DOD and VA were assisted by consultants who provided data collection, analysis, and methodological support. The pre-determined methodology involved examining previously decided cases to see how they would have been processed through each of the four pilot alternatives. The 33 selected cases intentionally reflected decisions originating from each of the military services and a broad range and number of medical conditions. Participants in the simulation exercise included officials from DOD, each military service, and VA who are involved in all aspects of the disability evaluation processes at both agencies. Metrics collected included case outcomes, including the fitness decision, the DOD and VA ratings, and the median expected days to process cases. For each pilot alternative, these outcomes were compared with actual outcomes. In addition, participants rank-ordered their preferences for each pilot alternative, and provided feedback on expected servicemember satisfaction as well as service and organization acceptance. They also provided their views on legislative and regulatory changes and resource requirements to implement alternative processes, and identified advantages and disadvantages of each alternative. This table top exercise enabled DOD and VA to obtain sufficient information to support a near-term decision to implement the pilot, but it also required some trade-offs. For example, the intensity of the exercise—simulating four pilot alternatives and involving more than 40 participants over a 5-day period—resulted in an examination of only a manageable number of cases. To ensure that the cases represented each military service and different numbers and types of potential medical conditions, a total of 33 cases were judgmentally selected by service: 8 Army, 9 Navy, 8 Marine, and 8 Air Force. However, the sample used in the simulation exercise was not statistically representative of each military service’s workload; as such, it is possible that a larger and more representative sample could have yielded different outcomes. Also, expected servicemember satisfaction was based on the input of the DOD and VA officials participating in the pilot rather than actual input from the servicemembers themselves. Based on the data from this exercise, the Senior Oversight Committee gave approval in October 2007 to proceed with piloting an alternative process with features that scored the highest in terms of participants’ preferential voting and projected servicemember satisfaction. These elements included a single VA rating (as provided in all the alternatives tested) and a comprehensive medical examination conducted by VA. The selected pilot design did not include a DOD-wide disability evaluation board. Rather, the services’ physical evaluation boards would continue to determine fitness for duty, as called for under Alternative 2.
DOD and VA officials have described to us a plan for expanding the pilot that is geared toward quick implementation, but may have limited pilot results available to them at a key juncture. With respect to time frames, the pilot, which began in November 2007, is scheduled to last 1 year, through November 2008. However, prior to that date, planners have expressed interest in expanding the pilot outside the Washington metropolitan area. Pilot planners have told us that around July 2008—which is not long after the first report on the pilot is due to Congress—they may ask the Senior Oversight Committee to decide on expansion to more locations based on data available at that time. They suggested that a few additional locations would allow them to collect more experience and data outside the Washington, D.C. area before decisions on broader expansion are made. According to DOD and VA officials, time frames for national expansion have not yet been decided. However, DOD also faces deadlines for providing Congress an interim report on the pilot’s status as early as October 2008, and for issuing a final report. While expanding the pilot outside the Washington, D.C. area will likely yield useful information to pilot planners, due to the time needed to fully process cases, planners may have limited pilot results available to guide their decision making. As of February 17, 2008, 181 cases were in the pilot process, but none had completed the process. After conducting the simulation exercise, pilot planners set a goal of 275 days (about 9 months) for a case to go through the entire joint disability evaluation process. If the goal is an accurate predictor of time frames, potentially very few cases will have made it through the entire pilot process by the time planners seek to expand the pilot beyond the Washington area. As a result, DOD and VA are accepting some level of risk by expanding the pilot solely on the basis of early pilot results. In addition to having limited information at this key juncture, pilot planners have yet to designate criteria for moving forward with pilot expansion and have not yet selected a comparison group to identify differences between pilot cases and cases processed under the current system to allow for assessment of pilot performance. DOD and VA are collecting data on decision times and rating percentages, but have not identified how much improvement in timeliness or consistency would justify expanding the pilot process. Further, pilot planners have not laid out an approach for measuring the pilot’s performance on key metrics—including timeliness and accuracy of decisions—against the current process. Selection of the comparison group cases is a significant decision because it will help DOD and VA determine the pilot’s impact compared with the current process and help planners identify needed corrections and manage for success. An appropriate comparison group might include servicemembers with a similar demographic and disability profile. Not having an appropriate comparison group increases the risk that DOD and VA will not identify problem areas or issues that could limit the effectiveness of any redesigned disability process. Pilot officials stated that they intend to identify a comparison group of non-pilot disability evaluation cases, but have not yet done so. Another key element lacking from current evaluation plans is an approach for surveying and measuring satisfaction of servicemembers and veterans with the pilot process.
As noted previously, several high-level commissions identified servicemember confusion over the current disability evaluation system as a significant problem. Pilot planners told us that they intend to develop a customer satisfaction survey and use customer satisfaction data as part of their evaluation of pilot performance, but as of February 2008, the survey was still under development. Even after the survey has been developed, results will take some time to collect and may be limited at key junctures because the survey needs to be administered after servicemembers and veterans have completed the pilot process. Without data on servicemember satisfaction, the agencies cannot know whether, or to what extent, the pilot they are implementing has been successful at reducing servicemember confusion and distrust of the current process. Over the past year, the Army has made substantial progress toward improving care for its servicemembers. After problems were disclosed at Walter Reed in early 2007, senior Army officials assessed the situation and have since dedicated significant resources—including more than 2,000 personnel—and attention to improving this important mission. Today, the Army has established Warrior Transition Units at its major medical facilities, and doctors, nurses, and fellow servicemembers at these units are at work helping wounded, injured, and ill servicemembers through what is often a difficult healing process. Some challenges remain, such as filling all the Warrior Transition Unit personnel slots in a competitive market for medical personnel, lessening reliance on borrowed personnel to fill slots temporarily, and getting servicemembers eligible for Warrior Transition Unit services into those units. Overall, the Army is to be commended for its efforts thus far; however, sustained attention to remaining challenges and reliable data to track progress will be important to sustaining gains over time. For those servicemembers whose military service was cut short due to illness or injury, the disability evaluation is an extremely important issue because it affects their service retention or discharge and whether they receive DOD benefits such as retirement pay and health care coverage. Once they become veterans, it affects the cash compensation and other disability benefits they may receive from VA. Going through two complex disability evaluation processes can be difficult and frustrating for servicemembers and veterans. Delayed decisions, confusing policies, and the perception that DOD and VA disability ratings result in inequitable outcomes have eroded the credibility of the system. The Army has taken steps to increase the number of staff who can help servicemembers navigate its process, but is challenged to meet stated goals. Moreover, even if the Army is able to overcome challenges and sufficiently ramp up staff levels, these efforts will not address the systemic problem of having two consecutive evaluation systems that can lead to different outcomes. Considering the significance of the problems identified, DOD and VA are moving forward quickly to implement a streamlined disability evaluation process that has the potential to reduce the time it takes to receive a decision from both agencies, improve the consistency of evaluations for individual conditions, and simplify the overall process for servicemembers and veterans. At the same time, DOD and VA are incurring some risk with this approach because the cases used in the simulation exercise were not necessarily representative of actual workloads.
Incurring some level of risk is appropriate and perhaps prudent in this current environment; however, planners should be transparent about that risk. For example, to date, planners have not yet articulated in their planning documents the extent of data that will be available at key junctures, and the criteria they will use in deciding to expand the pilot beyond the Washington, D.C. area. More importantly, decisions to expand beyond the few sites currently contemplated should occur in conjunction with an evaluation plan that includes, at minimum, a sound approach for measuring the pilot’s performance against the current process and for measuring servicemembers’ and veterans’ satisfaction with the piloted process. Failure to properly assess the pilot before significant expansion could potentially jeopardize the systems’ successful transformation. Mr. Chairman, this completes our prepared remarks. We would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov, or John H. Pendleton at (202) 512-7114 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this testimony are listed in appendix I. In addition to the contacts named above, Bonnie Anderson, Assistant Director; Michele Grgich, Assistant Director; Janina Austin; Susannah Compton; Cindy Gilbert; Joel Green; Christopher Langford; Bryan Rogowski; Chan My Sondhelm; Walter Vance; and Greg Whitney, made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In February 2007, a series of Washington Post articles about conditions at Walter Reed Army Medical Center highlighted problems in the Army's case management of injured servicemembers and in the military's disability evaluation system. These deficiencies included a confusing disability evaluation process and servicemembers in outpatient status for months and sometimes years without a clear understanding about their plan of care. These reported problems prompted various reviews and commissions to examine the care and services to servicemembers. In response to problems at Walter Reed and subsequent recommendations, the Army took a number of actions and DOD formed a joint DOD-VA Senior Oversight Committee. This statement updates GAO's September 2007 testimony and is based on ongoing work to (1) assess actions taken by the Army to help ill and injured soldiers obtain health care and navigate its disability evaluation process; and to (2) describe the status, plans, and challenges of DOD and VA efforts to implement a joint disability evaluation system. GAO's observations are based largely on documents obtained from and interviews with Army, DOD, and VA officials. The facts contained in this statement were discussed with representatives from the Army, DOD, and VA. Over the past year, the Army significantly increased support for servicemembers undergoing medical treatment and disability evaluations, but challenges remain. To provide a more integrated continuum of care for servicemembers, the Army created a new organizational structure--the Warrior Transition Unit--in which servicemembers are assigned key staff to help manage their recovery. Although the Army has made significant progress in staffing these units, several challenges remain, including hiring medical staff in a competitive market, replacing temporarily borrowed personnel with permanent staff, and getting eligible servicemembers into the units. To help servicemembers navigate the disability evaluation process, the Army is increasing staff in several areas, but gaps and challenges remain. For example, the Army expanded hiring of board liaisons to meet its goal of 30 servicemembers per liaison, but as of February 2008, the Army did not meet this goal at 11 locations that support about half of servicemembers in the process. The Army faces challenges hiring enough liaisons to meet its goals and enough legal personnel to help servicemembers earlier in the process. To address more systemic issues, DOD and VA promptly designed and are now piloting a streamlined disability evaluation process. In August 2007, DOD and VA conducted an intensive 5-day exercise that simulated alternative pilot approaches using previously-decided cases. This exercise yielded data quickly, but there were trade-offs in the nature and extent of data that could be obtained in that time frame. The pilot began with "live" cases at three treatment facilities in the Washington, D.C. area in November 2007, and DOD and VA may consider expanding the pilot to additional sites around July 2008. However, DOD and VA have not finalized their criteria for expanding the pilot beyond the original sites and may have limited pilot results at that time. Significantly, current evaluation plans lack key elements, such as an approach for measuring the performance of the pilot--in terms of timeliness and accuracy of decisions--against the current process, which would help planners manage for success of further expansion.
Federal crop insurance protects participating farmers against the financial losses caused by events such as droughts, floods, hurricanes, and other natural disasters. In 1995, crop insurance premiums were about $1.5 billion. USDA’s Risk Management Agency administers the federal crop insurance program through FCIC. Federal crop insurance offers farmers two primary types of insurance coverage. The first—called catastrophic insurance—provides protection against extreme crop losses for the payment of a $50 processing fee, whereas the second—called buyup insurance—provides protection against more typical smaller crop losses in exchange for a premium paid by the farmer. FCIC conducts the program primarily through private insurance companies that sell and service federal crop insurance—both catastrophic and buyup—for the federal government and retain a portion of the insurance risk. FCIC also offers catastrophic insurance through the local offices of USDA’s Farm Service Agency. FCIC pays the companies a fee, called an administrative expense reimbursement, that is intended to reimburse the companies for the expenses reasonably associated with selling and servicing crop insurance to farmers. The reimbursement is calculated as a percentage of the premiums received, regardless of the expenses incurred by the companies. Beginning in 1994, companies were required to report expenses in a consistent format following standard industry guidelines to provide FCIC with a basis for establishing future reimbursement rates. For buyup crop insurance, FCIC reduced the administrative expense reimbursement from a base rate of 34 percent of the premiums on policies sold from 1988 through 1991 to 31 percent of the premiums from 1994 through 1996. The 1994 reform act requires FCIC to reduce the reimbursement rate to no more than 29 percent of total premiums in 1997, no more than 28 percent in 1998, and no more than 27.5 percent in 1999. FCIC can set the rate lower than these mandated ceilings. In addition, the companies earn profits when insurance premiums exceed losses on policies for which they retain risk. These profits are called underwriting gains. Since 1990, companies selling crop insurance have earned underwriting gains totaling more than $500 million. FCIC had agreements with 22 companies in 1994 and 19 companies in 1995 to sell and service federal crop insurance. In 1995, the insurance companies sold about 80 percent of all federal crop insurance, while USDA’s Farm Service Agency sold the remainder. In performing our review, we examined expenses at nine companies representing about 85 percent of the total federal crop insurance premiums written by private companies in 1994 and 1995. We chose the companies considering factors such as premium volume, location, and type of ownership. In 1994 and 1995, FCIC’s administrative expense reimbursements to participating companies selling buyup insurance—31 percent of premiums—were higher than the expenses that can be reasonably associated with the sale and service of federal crop insurance. For the 2-year period, FCIC reimbursed the nine companies we reviewed about $580 million. For this period, the companies reported expenses of about $542 million to sell and service crop insurance—a difference of about $38 million. However, our review showed that about $43 million of the companies’ reported expenses could not be reasonably associated with the sale and service of federal crop insurance. 
Therefore, we believe that these expenses should not be considered by FCIC in determining an appropriate future reimbursement rate for administrative expenses. Furthermore, we found that a number of the reported expenses appeared excessive for reimbursement through a taxpayer-supported program and suggest an opportunity to further reduce future reimbursement rates for administrative expenses. Finally, a variety of factors have emerged since the period covered by our review that have increased companies’ revenues or may decrease their expenses, such as higher crop prices and premium rates and reduced administrative requirements. These factors should be considered in determining future reimbursement rates. Our review showed that about $43 million of the companies’ reported expenses could not be reasonably associated with the sale and service of federal crop insurance. These expenses, which we believe should not be considered in determining an appropriate future reimbursement rate for administrative expenses, included expenses for acquiring competitors’ businesses, protecting companies from underwriting losses, sharing company profits through bonuses or management fees, and lobbying expenses. Among the costs reported by the crop insurance companies that did not appear to be reasonably associated with the sale and service of crop insurance to farmers were those related to costs the companies incurred when they acquired competitors’ business. These costs potentially aided the companies in vying for market share and meant that one larger company, rather than several smaller companies, was delivering crop insurance to farmers. However, this consolidation was not required for the sale and service of crop insurance to farmers, provided no net benefit to the crop insurance program, and according to FCIC, was not an expense that FCIC expected its reimbursement to cover. For example, one company took over the business of a competing company under a lease arrangement. The lease payment totaled $3 million in both 1994 and 1995. About $400,000 of this payment could be attributed to actual physical assets the company was leasing, and we recognized that amount as a reasonable expense. However, the remaining $2.6 million—which the company was paying each year for access to the former competitor’s policyholder base—provided no benefit to the farmer and no net value to the crop insurance program. Likewise, we saw no apparent benefit to the crop insurance program from the $1.5 million the company paid executives of the acquired company over the 2-year period as compensation for not competing in the industry. In total, we identified costs in this general category totaling about $12 million for the 2-year period. We also found that two companies included payments to commercial reinsurers among their reported crop insurance delivery expenses. These are payments the companies made to other insurance companies to expand their protection against potential underwriting losses. This commercial reinsurance allows companies to expand the amount of insurance they are permitted to sell under insurance regulations while limiting their underwriting losses. The cost of reinsurance relates to company decisions to manage underwriting risks rather than to the sale and service of crop insurance to farmers. 
We discussed this type of expense with FCIC, and it agreed that this expense should be paid from companies’ underwriting revenues and thus should not be considered in determining a future reimbursement rate for administrative expenses. For the two companies that reported reinsurance costs as an administrative expense, these expenses totaled $10.7 million over the 2 years. Furthermore, we found that some companies included as administrative expenses for selling and servicing crop insurance, expenses that resulted from decisions to distribute profits to (1) company executives and employees through bonuses or (2) parent companies through management fees. We found that profit-sharing bonuses were a significant component of total salary expenses at one company, equaling 49 percent of basic salaries in 1994 and 63 percent in 1995. These bonuses totaled $9 million for the 2 years. While company profit sharing may benefit a company in competing with another company for employees, the bonuses do not contribute to the overall sale and service of crop insurance or serve to enhance program objectives. Furthermore, while we recognize that performance-based employee bonuses and bonuses paid to agents represent reasonable expenses, the profit-sharing bonuses in this example did not appear to be reasonable program expenses because they were paid out of profits after all necessary program expenses were paid. Additionally, we identified profit-sharing bonuses totaling $2.1 million reported as expenses at three other companies for 1994 and 1995. In total, we found expenditures in this general category amounting to $12.2 million over the 2 years. Similarly, we noted that two companies reported expenditures for management fees paid to parent companies as crop insurance administrative expenses. Company representatives provided few examples of tangible benefits received in return for their payment of the management fee. We recognized management fees as a reasonable program expense to the extent that companies could identify tangible benefits received from parent companies. Otherwise, we considered payment of management fees to be a method of sharing income with the parent company and paid in the form of a before-profit expense item rather than a dividend. These expenses totaled $1.1 million for the 2 years. FCIC’s standard reinsurance agreement with the companies precludes them from reporting expenditures for lobbying as crop insurance delivery expenses. Despite this prohibition, we found that the companies included a total of $418,400 for lobbying in their expenses reported for 1994 and 1995. The vast majority of these expenses involved the portion of companies’ membership dues attributable to lobbying by crop insurance trade associations. Adjusting for these and other expenses reported in error, we determined, and FCIC concurred, that the expense rate for companies’ expenses reasonably associated with the sale and service of buyup crop insurance in 1994-95 was about 27 percent of premiums. This is about 4 percentage points, or $81 million, less than the reimbursement FCIC provided. Of these 4 percentage points, 2 points reflect companies’ reported expenses that were less than their reimbursement; the remainder reflect adjustments to their reported expenses that did not appear to be reasonably associated with the sale and service of crop insurance. 
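The adjustment described above can be checked with a short back-of-the-envelope calculation. The Python sketch below uses only the aggregate figures cited in this statement for the nine companies we reviewed (a reimbursement of about $580 million at 31 percent of premiums, reported expenses of about $542 million, and about $43 million in disallowed expenses); the implied premium base is derived from the 31 percent rate and is an approximation, not a figure drawn from FCIC records.

```python
# Back-of-the-envelope check of the adjusted 1994-95 expense rate, using the
# aggregate figures cited in this statement (dollars in millions).
reimbursement = 580.0        # FCIC reimbursement to the nine companies
reimbursement_rate = 0.31    # 31 percent of buyup premiums
reported_expenses = 542.0    # expenses the companies reported
disallowed = 43.0            # expenses not reasonably associated with delivery

# Implied premium base, approximated from the 31 percent reimbursement rate.
premiums = reimbursement / reimbursement_rate

reasonable_expenses = reported_expenses - disallowed
adjusted_rate = reasonable_expenses / premiums       # about 27 percent
gap_points = reimbursement_rate - adjusted_rate      # about 4 percentage points
gap_dollars = reimbursement - reasonable_expenses    # about $81 million

print(f"Implied premiums: about ${premiums:,.0f} million")
print(f"Adjusted expense rate: about {adjusted_rate:.1%} of premiums")
print(f"Gap: about {gap_points:.1%} of premiums, or ${gap_dollars:,.0f} million")
```

Rounded, these figures reproduce the roughly 27 percent adjusted rate and the $81 million difference discussed above.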
In addition, we found a number of expenses reported by the companies that, although associated with the sale and service of crop insurance, seemed to be excessive for a taxpayer-supported program. While difficult to fully quantify, these types of expenditures suggest that opportunities exist for the government to reduce its future reimbursement rate for administrative expenses while still adequately reimbursing companies for the reasonable expenses of selling and servicing crop insurance policies. For example, in the crop insurance business, participating companies compete with each other for market share through the sales commissions paid to independent insurance agents. To this end, companies offer higher commissions to agents to attract them and their farmer clients from one company to another. When an agent switches from one company to another, the acquiring company increases market share, but there is no net benefit to the crop insurance program. On average, the nine companies in our review paid agents sales commissions of 16 percent of buyup premiums they sold in 1994 and 16.2 percent in 1995. However, one company paid more—an average of about 18.1 percent of buyup premiums sold in 1994 and 17.5 percent in 1995. When this company, which accounted for about 15 percent of all sales in these 2 years, is not included in the companies’ average, commission expenses for the other eight companies averaged 15.6 percent of buyup premiums in 1994 and 15.8 percent in 1995. This company paid its agents about $6 million more than the amount it would have paid had it used the average commission rate paid by the other eight companies. Furthermore, in our review of company-reported expenses, at eight of the nine companies, we found instances of expenses that seemed to be excessive for conducting a taxpayer-supported program. For example, we found that one company in our sample for 1994 reported expenses of $8,391 to send six company managers (four accompanied by their spouses) to a 3-day meeting at a resort location. The billing from the resort included rooms at $323 per night, $405 in golf green fees, $139 in charges at a golf pro shop, and numerous restaurant and bar charges. Our sample for 1995 included a $31,483 billing from the same resort for lodging and other costs associated with a company “retreat” costing $46,857 in total. In another instance, as part of paying for employees to attend industry meetings at resort locations, we found that one company paid for golf tournament entry fees, tickets to an amusement park, spouse travel, child care, and pet care, and reported these as crop insurance delivery expenses. Our review of companies’ expenses also showed that some companies’ entertainment expenditures appeared excessive for selling and servicing crop insurance to farmers. For example, one company spent about $44,000 in 1994 for a Canadian fishing trip for a group of company employees and agents. It also spent about $18,000 to rent and furnish a sky box at a baseball stadium. Company officials said that the expenditures were necessary to attract agents to the company. These expenditures were reported as travel expenses in 1994 and as advertising expenses in 1995. Moreover, the company’s 1995 travel expenses included $22,000 for a trip to Las Vegas for several company employees and agents. 
Similarly, our sample of companies’ expenditures disclosed payments for season tickets to various professional sports events at two other companies; and six companies paid for country club memberships and related charges for various company officials and reported these as expenses to sell and service crop insurance. While a number of the companies believe that the type of expenses described above are important to maintaining an effective sales force and supporting their companies’ mission, we, along with FCIC, believe that most of these expenses appear to be excessive for a program supported by the American taxpayers. Since the period covered by our review, a variety of factors have emerged that have increased companies’ revenues or may decrease companies’ expenses. Crop prices and premium rates increased in 1996 and 1997, thereby generating higher premiums. This had the effect of increasing the reimbursements paid to companies for administrative expenses by about 3 percent of premiums without a proportionate increase in workload for the companies. Moreover, FCIC and the industry’s efforts to simplify the program’s administrative requirements may reduce companies’ workload, thereby reducing their administrative expenses. As of January 1997, FCIC had completed 26 simplification actions and was continuing to study 11 additional potential actions. Neither FCIC nor the companies could precisely quantify the amount of savings that companies can expect from these changes, but they agreed that the changes were necessary and collectively may reduce costs. In 1995, the government’s total cost to deliver catastrophic insurance policies was less through USDA than through private companies. The total cost to the government to deliver catastrophic insurance consists of three components: (1) the basic sales and service delivery costs, (2) offsetting income from processing fees paid by farmers, and (3) company-earned underwriting gains. When only the first and second components were considered, the costs to the government for both delivery systems were comparable. However, the payment of an underwriting gain to companies, the third component, made the total 1995 cost of delivery through private companies more expensive to the government. With respect to the first component—basic sales and service delivery costs—the cost to the government was higher in 1995 when provided through USDA. The government’s costs for basic sales and service delivery through USDA included expenses associated with activities such as selling and processing policies; developing computer software; training adjusters and adjusting claims. These costs also included indirect or overhead costs, such as general administration, rent, and utilities. Also included in the 1995 direct and indirect costs for USDA’s delivery were the Department’s one-time start-up costs for establishing its delivery system. Direct costs for basic delivery through USDA amounted to about $91 per crop policy, and indirect costs amounted to about $42 per crop policy, for a total basic delivery cost to the government of about $133 per crop policy. The basic delivery cost to the government for company delivery consisted of the administrative expense reimbursement paid to the companies by FCIC and the cost of administrative support provided by USDA’s Farm Service Agency. 
The administrative expense reimbursement paid to the companies amounted to about $73 per crop policy, and USDA’s support costs amounted to about $10 per crop policy, for a total basic delivery cost to the government for company delivery of about $83 per crop policy. The second component—offsetting income from farmer-paid processing fees—reduced the basic delivery costs to the government for both delivery systems. For USDA’s delivery, processing fees paid by farmers and remitted to the Treasury reduced the government’s basic delivery cost of about $133 by an average of $53 per crop policy. For company delivery, fees paid by farmers and remitted to the government reduced the government’s basic delivery cost of about $83 by $7 per crop policy. For company delivery, the effect on the cost to the government was relatively small because the 1994 reform act authorized the companies to retain the fees they collected from farmers up to certain limits. Only those fees that exceeded these limits were remitted back to the government. Combining the basic sales and service delivery costs and the offsetting income from farmer-paid processing fees, the government’s costs were comparable for both delivery systems. The third component—underwriting gains paid by FCIC only to the companies—is the element that made delivery through the companies more expensive in 1995. The insurance companies can earn underwriting gains in exchange for taking responsibility for any claims resulting from those policies for which the companies retain risk. In 1995, companies earned an underwriting gain of an estimated $45 million, or about a 37-percent return, on the catastrophic premiums for which they retained risk. This underwriting gain increased the government’s delivery cost for company delivery by $127 per crop policy. Underwriting gains are, of course, not guaranteed. In years with a high incidence of catastrophic losses, companies could experience net underwriting losses, meaning that they would have to pay out money from their reserves in excess of the premium paid to them by the government, potentially reducing the government’s total cost of company delivery in such years. The 37-percent underwriting gain received by the companies on catastrophic policies in 1995 substantially exceeded FCIC’s long-term target. According to FCIC, the large underwriting gains in 1995 may have been unusual in that there were relatively few catastrophic loss claims and many farmers did not provide sufficient data on their production capabilities. In 1996, however, the underwriting gains on catastrophic policies were even higher—$58 million. The current arrangement for reimbursing companies for their administrative expenses—under which FCIC pays private companies a fixed percentage of premiums—has certain advantages, including ease of administration. However, expense reimbursement based on a percentage of premiums does not necessarily reflect the amount of work involved to sell and service crop insurance policies. Alternative reimbursement arrangements, including, among others, those that would (1) cap the reimbursement per policy or (2) pay a flat dollar amount per policy plus a reduced fixed percentage of premiums, offer the potential to better match FCIC’s reimbursements with companies’ administrative expenses. Each alternative has advantages and disadvantages, and we make no recommendation concerning which alternative, if any, should be pursued. 
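Before turning to those alternatives in detail, the 1995 catastrophic insurance delivery comparison above can be summarized with a short worked calculation. The Python sketch below uses only the rounded per-policy amounts cited in this statement; the netted totals are derived from those rounded figures and are therefore approximations.

```python
# 1995 cost to the government per crop policy for delivering catastrophic
# insurance, built from the rounded per-policy figures cited in this statement.
usda_delivery = {
    "basic delivery (direct $91 + indirect $42)": 91 + 42,
    "offsetting farmer-paid processing fees": -53,
}
company_delivery = {
    "basic delivery (reimbursement $73 + USDA support $10)": 73 + 10,
    "offsetting farmer-paid processing fees": -7,
    "underwriting gain paid to companies": 127,
}

usda_total = sum(usda_delivery.values())        # roughly $80 per policy
company_total = sum(company_delivery.values())  # roughly $203 per policy

print(f"USDA delivery:    roughly ${usda_total} per crop policy")
print(f"Company delivery: roughly ${company_total} per crop policy")
```

Netting the first two components yields comparable per-policy costs for the two delivery systems; the underwriting gain is what makes company delivery the more expensive option in this calculation.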
With respect to the first alternative, FCIC could reduce its total expense reimbursements to companies by capping, or placing a limit on, the amount it reimburses companies for the sale and service of crop insurance policies. Savings would vary depending on where the cap is set. Capping the expense reimbursement at around $1,500 per policy, for example, would result in a potential savings of about $74 million while affecting less than 10 percent of the individual policies written in 1995. Under the current reimbursement arrangement, as policy premiums increase, the companies’ reimbursement from FCIC for administering the policies increases. However, the workload, or cost, associated with administering the policy does not increase proportionately. Therefore, for policies with the highest premiums, there is a large differential between FCIC’s reimbursement and the costs incurred to administer those particular policies. For example, in 1995, the largest 3 percent of the policies received about one-third of the total reimbursement. In fact, the five largest policies in 1995 generated administrative expense reimbursements ranging from about $118,000 to $472,000. Alternatively, FCIC could reduce its total expense reimbursements to companies by paying a flat dollar amount per policy plus a reduced fixed percentage of premiums. FCIC could reimburse companies a fixed amount for each policy written to pay for the fixed expenses associated with each policy as well as a percentage of premium to compensate companies for the variable expenses associated with the size and value of a policy. For example, paying a flat $100 per policy plus 17.5 percent of premium could result in a potential savings of about $67 million. FCIC has included this alternative in its proposed 1998 standard reinsurance agreement with the industry. As we discuss in more detail in our report, while these and other alternative reimbursement methods could result in lower cost reimbursements to insurance companies, some methods may increase FCIC’s own administrative expenses for reporting and compliance. Some alternatives may also assist smaller companies to compete more effectively with larger companies and/or encourage more service to smaller farmers than does the current system. Companies generally prefer FCIC’s current reimbursement method because of its administrative simplicity. In conclusion, we recommended that the Administrator of the Risk Management Agency determine an appropriate reimbursement rate for selling and servicing crop insurance and include this rate in the new reinsurance agreement currently being developed between FCIC and the companies. Furthermore, we recommended that the Administrator explicitly convey the type of expenses that the administrative reimbursement is intended to cover. USDA’s Risk Management Agency agreed with our recommendations and has included these changes in the proposed 1998 agreement now being developed. The crop insurance industry disagreed with the methodology, findings, conclusions, and recommendations presented in our report. It expressed concern that we were not responsive to the mandate in the 1994 act and did not appropriately analyze company data. It also expressed concern that implementing GAO’s recommendations could destabilize the industry. 
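To make the two alternatives concrete, the sketch below computes the reimbursement for a single policy under the current arrangement, a per-policy cap, and a flat-amount-plus-percentage arrangement. The 31 percent rate, the $1,500 cap, and the $100-plus-17.5-percent structure come from the discussion above; the example premium amounts are hypothetical and chosen only to illustrate how the alternatives affect small and large policies differently.

```python
# Illustrative per-policy reimbursement under the current arrangement and the
# two alternatives discussed above. The example premiums are hypothetical.
def current(premium, rate=0.31):
    return rate * premium

def capped(premium, rate=0.31, cap=1_500):
    return min(rate * premium, cap)

def flat_plus_percentage(premium, flat=100, rate=0.175):
    return flat + rate * premium

for premium in (500, 5_000, 50_000):  # hypothetical policy premiums
    print(f"premium ${premium:>7,}: "
          f"current ${current(premium):>9,.0f}  "
          f"capped ${capped(premium):>8,.0f}  "
          f"flat + 17.5% ${flat_plus_percentage(premium):>8,.0f}")
```

In this illustration, the cap affects only the larger policies, while the flat-plus-percentage arrangement raises the reimbursement slightly for the smallest policy and lowers it substantially for the largest, consistent with the potential noted above to encourage more service to smaller farmers.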
We carefully reviewed the industry’s comments and continue to believe that our report fulfills the intent of the mandate, our methodology is sound, our report’s findings and conclusions are well supported, and our recommendations offer reasonable suggestions for reducing the costs of the crop insurance program. This completes my prepared statement. I will be happy to respond to any questions you may have.
GAO discussed the: (1) adequacy of the administrative expense reimbursement paid by the U.S. Department of Agriculture's (USDA) Federal Crop Insurance Corporation (FCIC) to participating insurance companies for selling and servicing crop insurance; and (2) comparative cost to the government of delivering catastrophic crop insurance through USDA and the private sector. GAO noted that: (1) for the 1994 and 1995 period it reviewed, GAO found that the administrative expense reimbursement rate of 31 percent of premiums paid to insurance companies resulted in reimbursements that were $81 million more than the companies' expenses for selling and servicing crop insurance; (2) furthermore, GAO found that some of these reported expenses did not appear to be reasonably associated with the sale and service of federal crop insurance and accordingly should not be considered in determining an appropriate future reimbursement rate for administrative expenses; (3) among these expenses were those associated with acquiring competitors' businesses, profit sharing bonuses, and lobbying; (4) in addition, GAO found other expenses that appeared excessive for reimbursement through a taxpayer-supported program; (5) these expenses suggest an opportunity to further reduce future reimbursement rates; (6) these expenses included agents' commissions that exceeded the industry average, unnecessary travel-related expenses, and questionable entertainment activities; (7) finally, a variety of factors that have emerged since the period covered by GAO's review have increased companies' revenues or may decrease companies' expenses; (8) crop prices and premium rates increased in 1996 and 1997, generating higher premiums; (9) this had the effect of increasing FCIC's expense reimbursement to companies; (10) at the same time, companies' expenses associated with crop insurance sales and service could decrease as FCIC reduces the administrative requirements with which the companies must comply; (11) combined, all these factors indicate that FCIC could lower the reimbursement to a rate in the range of 24 percent of premiums and still amply cover reasonable company expenses for selling and servicing federal crop insurance policies; (12) regarding the cost of catastrophic insurance delivery, GAO found that, in 1995, the government's total costs to deliver catastrophic insurance were less through USDA than private companies; (13) although the basic costs associated with selling and servicing catastrophic crop insurance through USDA and private companies were comparable, total delivery costs were less through USDA because USDA's delivery avoids the need to pay an underwriting gain to companies; (14) finally, GAO identified a number of different approaches to reimbursing companies for their administrative expenses that offer the opportunity for cost savings; and (15) companies generally prefer the existing reimbursement method because of its relative administrative simplicity.
DOD’s Joint Exercise Program provides an opportunity for combatant commanders to (1) train to the mission capability requirements described in the Joint Mission-Essential Task List and (2) support theater or global security cooperation requirements as directed in theater or in global campaign plans. All nine of the combatant commands, as well as the four military services, conduct exercises as a part of the Joint Exercise Program. The missions of the four combatant commands we visited are as follows: NORTHCOM conducts homeland defense, civil support, and security cooperation to defend and secure the United States and its interests. PACOM, with assistance from other U.S. government agencies, protects and defends the United States, its people, and its interests. In conjunction with its allies and partners, PACOM’s goal is to enhance stability in the Indo-Asia-Pacific region by promoting security cooperation, responding to contingencies, deterring aggression, and, when necessary, fighting to win. STRATCOM conducts global operations in coordination with other combatant commands, military services, and appropriate U.S. government agencies to deter and detect strategic attacks against the United States, its allies, and partners. TRANSCOM provides a full spectrum of global mobility solutions and related enabling capabilities for supported customers’ requirements in peace and war. The key players with roles and responsibilities in the Joint Exercise Program are as follows: the Principal Deputy Assistant Secretary of Defense for Readiness, whose responsibilities include administering the Combatant Commanders Exercise Engagement and Training Transformation account; the Director for Joint Force Development, Joint Staff (J7), whose responsibilities include managing the Combatant Commanders Exercise Engagement and Training Transformation account and providing enabling capabilities that support combatant commands’ and the military services’ training; the combatant commands, which develop, publish, and execute command Joint Training Plans and joint training programs for command staff and assigned forces; and the military services, whose responsibilities include providing trained and ready forces for joint employment and assignment to combatant commands. In fiscal year 2016, the Combatant Commanders Exercise Engagement and Training Transformation account provided approximately $600 million to fund more than 150 training events. Funding from this account covers items such as personnel travel and per diem for planning conferences and exercise support events, transportation of cargo, airlift, sealift and port handling, intra-theater transportation for participating units, consultant advisory and assistance services, equipment and supplies, and operation and maintenance for training support facilities and equipment. From fiscal year 2013 through fiscal year 2016, funding for this account decreased by about $149 million, or by 20 percent, while the number of exercises conducted remained relatively unchanged (see fig. 1). DOD officials told us that, in part as a result of reduced funding for the Joint Exercise Program, they have at times reduced the scope of exercises or sought alternative methods, such as relying on organic lift capabilities of Service components or partnering with another combatant command to execute exercises.
Other factors that could impact the ability of combatant commands to execute exercises include the availability of forces; diplomatic (political and military) considerations; and real world events, such as natural disasters. Though DOD officials stated that these factors are largely outside of the sphere of combatant commander influence and therefore are not included in the original planning of the exercises, officials stated that they use various approaches to try to mitigate the effect these factors have on their ability to carry out their respective joint exercise programs. If these factors cannot be mitigated, the combatant command might cancel a joint exercise, a mitigation strategy of last resort. DOD has developed a body of guidance for the Joint Exercise Program and is working to update a key outdated guidance document that identifies overarching roles and responsibilities for military training in accordance with a congressional requirement in a House Committee on Armed Services report accompanying the National Defense Authorization Act for Fiscal Year 2017. DOD-wide guidance, policies, and procedures addressing various aspects of the Joint Exercise Program are contained in the following documents: DODD 1322.18, Military Training, (Jan. 13, 2009), is the overarching guidance for military training and identifies the roles and responsibilities for training military individuals; units; DOD civilian employees; and contractors, among others. The Program Goals and Objectives document provides guidance for all programs and activities that utilize funds from the Combatant Commanders Exercise Engagement and Training Transformation account. CJCSI 3500.01H, Joint Training Policy for the Armed Forces of the United States, (April 25, 2014), establishes guidance for the Joint Training System—an integrated, requirements-based, four-phased approach that is used to align a combatant commander’s Joint Training Strategy with assigned missions to produce trained and ready individuals, staff, and units. The Joint Training System is used by combatant commanders to execute the Joint Exercise Program as shown in figure 2. See appendix II for a more detailed description of the Joint Training System. CJCSN 3500.01, 2015-2018 Chairman’s Joint Training Guidance, (Oct. 30, 2014), provides the Office of the Chairman of the Joint Chiefs of Staff’s joint training guidance to all DOD components for the planning, execution, and assessment of joint individual and collective training for fiscal years 2015 through 2018. CJCSM 3500.03E, Joint Training Manual for the Armed Forces of the United States, (April 20, 2015), provides guidance and procedures for the Joint Training System. Specifically, it focuses on determining joint training requirements, planning and executing joint training, and assessing joint training. CJCSM 3511.01, Joint Training Resources of the Armed Forces of the United States, (May 26, 2015), provides detailed guidance on joint funding, joint transportation, and joint training support resources for joint training exercises. CJCSI 3150.25F, Joint Lessons Learned Program, (June 26, 2015), provides guidance for gathering, developing, and disseminating joint lessons learned for the armed forces for joint training exercises. Each of the combatant commands we visited had developed their own implementation guidance, which is consistent with DOD’s guidance for the Joint Exercise Program. 
While DOD has a body of guidance for the Joint Exercise Program, DODD 1322.18, the key overarching guidance for military training that identifies roles and responsibilities for training, including those for the Joint Exercise Program, is outdated. Specifically, this directive assigns significant roles and responsibilities relevant to the Joint Exercise Program to U.S. Joint Forces Command, a combatant command that has not existed since August 2011. For example, according to DODD 1322.18, U.S. Joint Forces Command is responsible for working through the Office of the Chairman of the Joint Chiefs of Staff to manage joint force training, accredit joint training programs for designated joint tasks, and provide Combatant Commanders Exercise Engagement and Training Transformation funds to support the Joint Exercise Program. According to subsequent guidance issued in April 2014, the Office of the Chairman of the Joint Chiefs of Staff was assigned the roles and responsibilities formerly performed by U.S. Joint Forces Command. Additionally, the Office of the Assistant Secretary of Defense for Readiness, instead of U.S. Joint Forces Command, now administers the Combatant Commanders Exercise Engagement and Training Transformation account, which funds the Joint Exercise Program. In House Report 114-537, the House Committee on Armed Services also noted that DODD 1322.18 is outdated and does not account for significant organizational changes that have occurred within the department—specifically, the disestablishment of U.S. Joint Forces Command and the establishment of the Assistant Secretary of Defense for Readiness. Consequently, the report directs DOD to update its guidance and brief the committee on its progress in updating the guidance by December 1, 2016. According to an official from the Office of the Assistant Secretary of Defense for Readiness, the office responsible for DODD 1322.18, the department is aware that the directive is outdated and is working on updating it but is unsure of when the update process will be completed. Specifically, according to this DOD official, the department is working to determine whether the directive can be updated through an administrative update, which requires less coordination and time to process than a total reissuance of guidance. Once DOD completes the update and includes information on current roles and responsibilities, its key guidance for the Joint Exercise Program should be consistent with other guidance.
The Joint Assessment and Enabling Capability office is working with individual combatant commands to develop measures to assess the return on investment of the Joint Exercise Program using the SMART rubric approach. For example, NORTHCOM officials told us that they are working with the Joint Assessment and Enabling Capability office to develop a better method to measure the return on investment for NORTHCOM joint exercises because the ones they currently use, such as the number of joint mission-essential tasks in an exercise, do not reveal any information that would be helpful for decision making. The officials stated that they are still trying to determine the threshold for the amount of information that is necessary to measure return on investment for their joint exercises. Further, officials stated that they were drafting performance measures to assess return on investment to submit to their leadership for approval. Additionally, TRANSCOM officials told us that they too are working with the Joint Assessment and Enabling Capability office but have not yet determined how to effectively gauge return on investment for training dollars spent on its exercises. According to DOD and combatant command officials we interviewed, readiness is their key performance measure and they have ongoing efforts to develop more tangible, quantifiable measures to determine the return on investment for conducting exercises. However, according to combatant command officials, return on investment is sometimes intangible and may not be seen immediately. Officials stated that it could take years to recognize the return on investment for conducting an exercise. For example, PACOM officials told us that they conducted two multinational planning exercises and a multinational force standard operating procedures workshop designed to increase the speed of initial response forces to an emergent issue and enhance relationships with partner countries for several years. According to PACOM officials, the return on that investment, however, was not realized until April 25, 2015, when a region northwest of Kathmandu, Nepal, was devastated by a 7.8 magnitude earthquake and the after-action report for that earthquake indicated that PACOM’s exercises were vital in preparing the Nepal Army for its response. DOD uses two key information technology systems—JTIMS and the Execution Management System—to manage the execution of the Joint Exercise Program, but DOD does not have assurance that the Execution Management System produces quality information. DOD uses JTIMS as the system of record for the Joint Exercise Program and combatant commanders plan and manage their joint training exercises through JTIMS. JTIMS automates the management of joint exercise training data through a web-based system and supports the application of the four phases of the Joint Training System. Specifically, JTIMS, which is managed by the Office of the Chairman of the Joint Chiefs of Staff, is used, among other things, to (1) request and track forces for joint training exercises, (2) publish the Joint Training Plan, (3) document and manage joint training programs, and (4) capture the assessments of exercises. Chairman of the Joint Chiefs of Staff policy requires the use of JTIMS for a number of fields. 
For example, guidance requires, among other things, that the combatant commands enter key information about a training exercise, such as its objectives, intended audience (i.e., the joint forces being trained), lessons learned, observations on performance, and costs. However, the extent to which these fields are used and the quality of the data entered vary by combatant command. Combatant commands and other DOD entities rely on the information entered in JTIMS both to conduct their exercises and to participate in exercises sponsored by other combatant commands. However, during the course of our review, we were informed of and observed significant variation in the type and quality of information entered in JTIMS. For example, officials from one combatant command stated that an exercise description entered by another combatant command did not provide sufficient detail, making it difficult to understand the focus of the exercise. In addition, TRANSCOM officials randomly selected exercises in JTIMS to show us the type of information entered in the system, and we noted that the level of detail provided sometimes varied significantly by combatant command. Furthermore, officials from two of the four combatant commands we visited stated that sometimes the information captured in JTIMS is not useful and could negatively affect their ability to coordinate training with other combatant commands or to extract pertinent information about exercises from the system that would be helpful in planning them. According to an Office of the Chairman of the Joint Chiefs of Staff official, it is important that combatant commands enter information in JTIMS in a consistent and standardized manner so that the information is easily understood and useful for all joint exercise training participants and planners. The Office of the Chairman of the Joint Chiefs of Staff and combatant command officials told us that the lack of standardized information in JTIMS is due to the absence of detailed instructions in guidance on inputting information into JTIMS. Consequently, to help improve the consistency and standardization of information across combatant commands, the Office of the Chairman of the Joint Chiefs of Staff published a user guide for JTIMS that is intended to mitigate inconsistencies in the information entered there, standardize the use of the system across DOD, and improve the overall understanding of the system. According to this official, the user guide was completed in October 2016. The Office of the Chairman of the Joint Chiefs of Staff plans to periodically update the user guide to keep pace with joint training policy updates, JTIMS software upgrades, and joint training enterprise business rule modifications. In addition to providing step-by-step instructions on using JTIMS, the guide also provides examples of the type of information that should be entered in specified fields. Such information should help improve the overall understanding of JTIMS and bring consistency to its use across the combatant commands. DOD uses the Execution Management System, a web-based database, to track and oversee the most recent execution performance data (hereafter referred to as data) for the Joint Exercise Program.
According to officials, it is important to have accurate and current data in the Execution Management System because it provides instant status of the over- and underexecution of funds for the Joint Exercise Program, which is critical to the efficient and effective execution of the Joint Exercise Program. Moreover, officials from the Assistant Secretary of Defense for Readiness office stated that data from this system are used to report how funds are being expended for the Joint Exercise Program to both DOD decision makers and Congress. In April 2016, the Office of the Assistant Secretary of Defense for Readiness issued guidance on the use of the Execution Management System to the combatant commands. This guidance, referred to as the Execution Management System Standard Operating Procedure and User Guide, states that Joint Exercise Program managers are required to (1) enter the most recent obligation and expenditure amounts for any transactions funded through the Combatant Commanders Exercise Engagement and Training Transformation account on a monthly basis and (2) upload supporting documentation for transactions. The guide also specifies the type of supporting documents that should be uploaded into the Execution Management System, such as awarded contracts, invoices, and travel payments. Prior to issuing guidance in April 2016, an official responsible for administering the Combatant Commanders Exercise Engagement and Training Transformation account stated that the combatant commands were informed of the requirement to upload supporting documentation into the Execution Management System in 2011. NORTHCOM, STRATCOM, and PACOM officials told us that they were aware of this requirement prior to April 2016. TRANSCOM officials initially stated that they were unaware of the requirement; however, our review of the Execution Management System revealed that they were uploading supporting documentation for some fiscal years prior to the guidance being issued. During our review of the Execution Management System, we found that the combatant commands we visited had not fully implemented the guidance and that the quality of information in the system was questionable. Specifically, we found that: The Execution Management System is missing supporting documentation. Based on our review, we found that two— STRATCOM and NORTHCOM—of the four combatant commands we visited had uploaded supporting documentation, as required by the Execution Management System guidance, for fiscal years 2013-16. A third combatant command, TRANSCOM, uploaded supporting documentation for fiscal years 2013, 2014, and 2016, but did not upload supporting documentation for fiscal year 2015. The fourth combatant command, PACOM, did not upload supporting documentation for fiscal years 2013, 2014, and 2015, but began uploading supporting documentation in August 2016 for fiscal year 2016 after we informed an official in the Office of the Assistant Secretary of Defense for Readiness that the command had not been uploading supporting documentation in accordance with the Execution Management System guidance. TRANSCOM and PACOM officials stated that one of the reasons they did not upload supporting documentation, as required by guidance, was due to the volume of travel and other related documents generated in executing joint training exercises. Officials stated that it was overly burdensome to upload all of these documents. 
Nonetheless, an official from the Office of the Assistant Secretary of Defense for Readiness stated that the combatant commands need to do their due diligence in uploading supporting documentation in order to ensure proper accountability of Combatant Commanders Exercise Engagement and Training Transformation funds. This official further stated that efforts are underway, including establishing a new method for distributing funds to stakeholders, to identify approaches that will reduce the data entry burden at the stakeholder level. Documentation for expenditures uploaded into the Execution Management System did not match reported total expenditures for any of the four combatant commands we visited. Based on our review of a nongeneralizable sample of supporting documentation for fiscal years 2014 through 2016 that was uploaded into the Execution Management System, we found that the sum of the individual expenditures reported in supporting documentation did not match the corresponding total expenditures entered in the system for any of the four combatant commands we visited. According to one combatant command official familiar with this system, individual expenditures reported in supporting documents should be reconcilable to yearly cumulative totals for expenditures. However, when we attempted to link the sum of individual expenditures reported in uploaded supporting documentation to the total expenditures data entered into the Execution Management System by combatant command officials, we were unable to do so for three of the four combatant commands we visited. For example, in fiscal year 2015, NORTHCOM uploaded more than 100 documents that supported how funds were obligated or committed. Our review found that the uploaded documentation supported approximately $12.7 million in funds that were committed; however, the figure entered in the Execution Management System was about $11.9 million. Similarly, in fiscal year 2014, TRANSCOM supporting documentation showed that commitments totaled approximately $66.8 million, while the figure entered in the Execution Management System was approximately $4.6 million. Officials stated that the supporting documentation does not match the figures entered in the Execution Management System because some supporting documentation had not been uploaded. Nonetheless, the inability to reconcile supporting documentation with the expenditures entered in the Execution Management System undermines the quality of the data in the system and inhibits DOD decision makers, particularly those in the Office of the Assistant Secretary of Defense for Readiness, from providing adequate oversight of how funds are being expended in support of the Joint Exercise Program goals. Moreover, the inconsistent uploading of the required supporting documentation and the difficulty in reconciling individually reported transactions with cumulative values entered into the system suggest that weaknesses exist in the Execution Management System data entry procedures, which affect the quality of the data entered in the system. These weaknesses call into question the use of the Execution Management System, which, according to DOD officials, was established to provide real-time, accurate information on the execution of Joint Exercise Program funds to decision makers. DOD has not implemented key processes to help ensure that the Execution Management System produces quality information.
The quality of the reporting, tracking, and reconciliation of data recorded in the Execution Management System is further weakened because none of the four combatant commands we visited, the Office of the Chairman of the Joint Chiefs of Staff, or the Office of the Assistant Secretary of Defense for Readiness had instituted key systemic processes to help ensure that the data entered in the Execution Management System produce quality information—that is, information that is appropriate, current, complete, accurate, accessible, and timely. Standards for Internal Control in the Federal Government states that a variety of control activities should be used for information systems to support the completeness, accuracy, and validity of information processing, and the production of quality information. In addition, management should evaluate information processing to ensure that it is complete, accurate, and valid. Further, these standards state that appropriate documentation of transactions should be readily available for examination. Using these internal controls could reduce to an acceptable level the risk that a significant mistake could occur and remain undetected and uncorrected.

Individuals from all four of the combatant commands we visited stated that only one person at each command was responsible for entering data into the Execution Management System and that, although they believed their entries were reliable, no quality assurance oversight was conducted on their work. An official from the Office of the Assistant Secretary of Defense for Readiness stated that periodic reviews are conducted on data entered in the Execution Management System, but that these reviews are mainly focused on the execution rates of funds and not on whether the data entered produce quality information. Further, according to an official from the Office of the Chairman of the Joint Chiefs of Staff, the checks that office performs on the data entered in the Execution Management System are similar to those conducted by the Office of the Assistant Secretary of Defense for Readiness, in that they focus on whether monthly expenditures have been entered into the system to ensure that monthly benchmarks are met and less on whether the data entered produce quality information. Officials from two of the four combatant commands we visited stated that they sometimes receive phone calls from the Office of the Chairman of the Joint Chiefs of Staff or the Office of the Assistant Secretary of Defense for Readiness asking them to validate certain data entries that appeared erroneous based on an informal review. However, no officials at the combatant commands we visited, the Office of the Chairman of the Joint Chiefs of Staff, or the Office of the Assistant Secretary of Defense for Readiness could demonstrate systemic processes for ensuring that the Execution Management System produced quality information. The absence of quality assurance processes can affect the quality of the information produced by the system that DOD uses to determine its most recent execution rates and defend the Joint Exercise Program's budget. As previously discussed, the combatant commands are not following guidance requiring them to upload supporting documentation, and DOD lacks effective internal controls to help ensure the reliability of the data in the system.
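To illustrate the kind of reconciliation check described above, the following sketch shows how the individual amounts reported in uploaded supporting documents could be summed and compared with the cumulative total entered into the system for a given command and fiscal year. The sketch is hypothetical and is not part of the Execution Management System or of our audit tools; the roughly $12.7 million and $11.9 million totals reflect the NORTHCOM fiscal year 2015 example discussed above, while the individual line-item amounts are invented solely to make the example runnable.

```python
# Hypothetical reconciliation sketch; not part of DOD's Execution Management System.
# It sums the individual amounts reported in uploaded supporting documents and
# compares the result with the cumulative total a command entered into the system.

def reconcile(documented_amounts, entered_total, tolerance=0.0):
    """Return the documented total, the gap, and whether the two figures reconcile."""
    documented_total = sum(documented_amounts)
    gap = documented_total - entered_total
    return documented_total, gap, abs(gap) <= tolerance

# Totals reflect the NORTHCOM fiscal year 2015 example in this report (about
# $12.7 million documented versus about $11.9 million entered); the individual
# line items below are invented solely to make the example runnable.
uploaded_line_items = [4_200_000, 3_500_000, 2_800_000, 2_200_000]
entered_total = 11_900_000

documented, gap, reconciled = reconcile(uploaded_line_items, entered_total)
print(f"Documented: ${documented:,}  Entered: ${entered_total:,}  "
      f"Gap: ${gap:,}  Reconciled: {reconciled}")
```

A check of this kind, run for each command and fiscal year, would flag the gaps we identified so that missing documents or erroneous entries could be corrected.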
DOD officials acknowledged the issues we identified regarding inadequate supporting documentation and data reliability within the Execution Management System. A senior DOD official from the Office of the Assistant Secretary of Defense for Readiness (Resources) stated that DOD plans to address these control weaknesses with respect to the Combatant Commanders Exercise Engagement and Training Transformation account for the Joint Exercise Program as part of its implementation of DOD's FIAR Guidance beginning in fiscal year 2018 to ensure that the account is audit ready. DOD established the FIAR Plan as its strategic plan and management tool for guiding, monitoring, and reporting on the department's ongoing financial management improvement efforts and for communicating the department's approach to addressing its financial management weaknesses and achieving financial statement audit readiness. To implement the FIAR Plan, the DOD Comptroller issued the FIAR Guidance, which provides a standard methodology for DOD components to follow to assess their financial management processes and controls and to develop and implement financial improvement plans. These plans, in turn, are intended to provide a framework for planning, executing, and tracking essential steps and related supporting documentation needed to achieve auditability. We believe that if DOD appropriately follows the steps outlined in the FIAR Guidance when executing the Combatant Commanders Exercise Engagement and Training Transformation account, doing so may help improve the quality of funds execution data from this account and make the account audit ready. However, as previously stated, the FIAR Guidance will not be implemented for the Joint Exercise Program until fiscal year 2018, and the effectiveness of the guidance cannot be fully determined until after that time. In the meantime, according to a senior DOD official, DOD plans to continue using the Execution Management System, which is intended to capture the most recent data for the Joint Exercise Program and inform management decision making regarding joint exercise investments. Without ensuring that the required supporting documentation is uploaded and implementing effective internal controls to ensure that data entered in the Execution Management System produce quality information, DOD and other key decision makers may not have the correct financial execution information to defend the Joint Exercise Program's budget.

DOD has developed a body of guidance for the Joint Exercise Program. In addition, DOD has implemented an approach to develop performance measures to assess the effectiveness of the Joint Exercise Program. Further, DOD uses JTIMS and the Execution Management System to manage the Joint Exercise Program. JTIMS is the system of record for executing the Joint Exercise Program, and officials from the Office of the Chairman of the Joint Chiefs of Staff developed a user guide intended to help bring more standardization to the system, thereby making the information more useful to other combatant commands. The Execution Management System is used to oversee and report on the most recent execution performance data for Joint Exercise Program funding. However, not all of the combatant commands were following guidance requiring them to upload supporting documentation, making it difficult for DOD to have oversight of expenditures for the Joint Exercise Program. Finally, DOD and the combatant commands lack systemic processes for ensuring that the Execution Management System produces quality information.
Without ensuring that supporting documentation is uploaded and implementing effective internal controls to ensure the completeness and accuracy of financial information captured for the Joint Exercise Program, DOD and other key decision makers may not have the correct financial information to defend the Joint Exercise Program's budget.

To better ensure that quality financial execution information is available to guide the Joint Exercise Program, we recommend that the Secretary of Defense direct the Office of the Assistant Secretary of Defense for Readiness to take the following two actions:

direct the combatant commanders to take steps to comply with current Execution Management System guidance to upload supporting documentation that is reconcilable to funds executed from the Combatant Commanders Exercise Engagement and Training Transformation account; and

as the department implements financial improvement plans in accordance with the FIAR guidance, include specific internal control steps and procedures to address and ensure the completeness and accuracy of information captured for the Joint Exercise Program's Combatant Commanders Exercise Engagement and Training Transformation account.

We provided a draft of this report to DOD for review and comment. In its written comments, which are summarized below and reprinted in appendix IV, DOD partially concurred with both recommendations. DOD also provided technical comments, which we incorporated as appropriate.

DOD partially concurred with our recommendation to direct the combatant commanders to take steps to comply with current Execution Management System guidance to upload supporting documentation that is reconcilable to funds executed from the Combatant Commanders Exercise Engagement and Training Transformation account. In its comments, DOD stated that the Execution Management System is not a system of record but rather a "desk-side" support tool that relies on manual inputs and uploads and that the reconciliation of obligation and execution related to Joint Exercise Program funding occurs elsewhere. DOD further noted that the Office of the Assistant Secretary of Defense for Readiness (OASD(R)) issued guidance and routinely reinforces best practices for the use of the Execution Management System tool to ensure that it produces quality information. Lastly, DOD noted in its comments that it may not continue using the Execution Management System beyond fiscal year 2017. We recognize that the Execution Management System is not a system of record. However, as we also note in the report, it is a tool used by DOD to make decisions regarding the Joint Exercise Program because the Defense Finance and Accounting Service (DFAS) Accounting Report Monthly 1002, the system of record, lags behind. Additionally, we acknowledge that DOD issued guidance for the Execution Management System in April 2016, but our work found that the guidance was not routinely reinforced. For example, as we identified in our report, two of the four combatant commands we visited had not, in fact, uploaded supporting documentation, as required by the Execution Management System guidance, for fiscal years 2013-2016.
While the Execution Management System may not be funded beyond fiscal year 2017, we continue to believe that, for the reasons discussed in this report, combatant commanders should take the necessary steps to comply with existing guidance that requires the uploading of supporting documentation into the Execution Management System for as long as the system remains in use. Doing so would help ensure that when DOD managers make decisions regarding Joint Exercise Program funding, they use information from a financial data system that is reconcilable and auditable.

DOD partially concurred with our recommendation that, as the department implements financial improvement plans in accordance with the FIAR guidance, it include specific internal control steps and procedures to address and ensure the completeness and accuracy of information captured for the Joint Exercise Program's Combatant Commanders Exercise Engagement and Training Transformation account. In its comments, DOD described OASD(R) as having a supporting role in the execution of FIAR plans, which are implemented by Washington Headquarters Services and the Office of the Secretary of Defense-Comptroller, and stated that these agencies provide specific internal controls, processes, and procedures for ensuring the completeness and accuracy of obligation and execution data. DOD also stated that the Execution Management System is not a component of FIAR and may not be funded after fiscal year 2017, and that, as the department moves toward audit readiness, necessary steps and procedures will be put into place to strengthen auditability. As we stated in the report, the FIAR guidance provides a standard methodology and framework for assessing and developing a system of internal controls to achieve auditability. However, as we recommended, DOD still needs to implement specific internal control steps and procedures as it implements this guidance to ensure the completeness and accuracy of the financial information for the Joint Exercise Program's Combatant Commanders Exercise Engagement and Training Transformation account. Further, as we reported, the FIAR guidance will not be implemented in the Joint Exercise Program until fiscal year 2018, and the effectiveness of the guidance cannot be fully determined until after that time. Accordingly, we continue to believe that the recommendation remains valid.

We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Chairman of the Joint Chiefs of Staff; and the Commanders of U.S. Northern Command, U.S. Pacific Command, U.S. Strategic Command, and U.S. Transportation Command. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

This report (1) describes guidance the Department of Defense (DOD) has developed for its Joint Exercise Program and DOD's approach to assess the effectiveness of the program and (2) evaluates the extent to which DOD uses two key information systems—the Joint Training Information Management System (JTIMS) and the Execution Management System—to manage the Joint Exercise Program.
DODD 1322.18, Military Training (January 13, 2009)
CJCSI 3500.01H, Joint Training Policy for the Armed Forces of the United States (April 25, 2014)
CJCSM 3150.25A, Joint Lessons Learned Program (September 12, 2014)
CJCSN 3500.01, 2015-2018 Chairman's Joint Training Guidance (October 30, 2014)
CJCSM 3500.03E, Joint Training Manual for the Armed Forces of the United States (April 20, 2015)
CJCS Guide 3501, The Joint Training System: A Guide for Senior Leaders (May 5, 2015)
CJCSM 3511.01H, Joint Training Resources for the Armed Forces of the United States (May 26, 2015)
CJCSI 3150.25F, Joint Lessons Learned Program (June 26, 2015)
NORAD and NORTHCOM, Joint Training System (JTS) Handbook (May 1, 2013)
NORAD and NORTHCOM Instruction 16-166, Lessons Learned Program and Corrective Action Program (September 19, 2013)
NORAD and NORTHCOM, JTPs (December 10, 2015)
PACOM Instruction 0509.1, Joint Lessons Learned and Issue Resolution Program (April 7, 2010)
PACOM Instruction 0508.12, Joint Training Enterprise in U.S. Pacific Command (October 15, 2012)
SI 508-09, Exercise Program (May 3, 2013)
SI 509-01, After Action, Issue Solution/Resolution and Lessons Learned Program (October 14, 2015)
SI 508-03, JTIMS Procedures (November 8, 2015)
USTRANSCOM Pamphlet 38-1, Organization and Functions (May 1, 2008)
USTRANSCOM Instruction 36-13, Training, Education, and Professional Development Program (May 23, 2013)
USTRANSCOM Instruction 36-36, Joint Training and Education Program (August 29, 2014)
USTRANSCOM Instruction 10-14, Joint Lessons Learned Program (November 9, 2015)

We judgmentally selected these combatant commands for our site visits to achieve a mix of geographic and functional commands and to account for the funds apportioned to the combatant commands in fiscal year 2016 from the Combatant Commanders Exercise Engagement and Training Transformation account, the size of each command, and its location.

We reviewed DOD's approach to make performance measures specific, measurable, achievable, realistic, and time-phased (commonly referred to as the SMART rubric) to assess the return on investment for the Joint Exercise Program. In addition, we reviewed performance measures reportedly used to assess the ability of the training audience to accomplish training objectives for exercises, as well as measures used to assess the return on investment for conducting an exercise. We also reviewed performance documentation and information captured in JTIMS, as well as a nongeneralizable sample of commander summary reports or after-action reports from seven combatant command joint exercises to understand the content of these reports. Finally, we interviewed senior officials from the Office of the Assistant Secretary of Defense for Readiness, including the Joint Assessment and Enabling Capability office, and the Office of the Chairman of the Joint Chiefs of Staff, as well as officials from the four selected combatant commands. We did not review or evaluate the quality of any assessments that DOD has conducted for its joint exercises. We reviewed the assessments only to the extent necessary to confirm that an assessment process existed.

To evaluate DOD's use of JTIMS and the Execution Management System to manage the Joint Exercise Program, we reviewed guidance for JTIMS and the Execution Management System. In addition, we observed data associated with a nongeneralizable sample of joint exercises maintained in JTIMS.
We also reviewed and analyzed a nongeneralizable sample of cumulative financial data and supporting documentation, if any, entered by combatant command users in the Execution Management System for the Joint Exercise Program during fiscal years 2013-16 to examine the internal controls that were in place. We reviewed a nongeneralizable sample of supporting documentation uploaded into the Execution Management System for fiscal years 2014 through 2016 to determine compliance with the guidance issued by DOD for the Execution Management System and the Standards for Internal Control in the Federal Government. Further, we compared individual transactions reported in the supporting documentation with the corresponding cumulative data entered into the system. We also reviewed the FIAR plan—DOD's strategic plan and management tool for guiding, monitoring, and reporting on the department's ongoing financial management improvement efforts and for communicating the department's approach to addressing its financial management weaknesses and achieving financial statement audit readiness—and the related guidance. Additionally, we spoke with cognizant officials from the four combatant commands we visited, the Office of the Chairman of the Joint Chiefs of Staff, and the Office of the Assistant Secretary of Defense for Readiness about the systems used to execute and manage the Joint Exercise Program. Further, we attended the sessions most pertinent to this engagement at the 3-day Annual Review for the Combatant Commanders Exercise Engagement and Training Transformation Enterprise on the budget for fiscal years 2018 through 2022. For example, because the military services were not included in the scope of our review, we did not attend their sessions.

We conducted site visits to collect testimonial and documentary evidence about DOD's Joint Exercise Program at the following locations:

Cost Assessment and Program Evaluation Office, Arlington, Virginia
Force Readiness and Training in the Office of the Assistant Secretary of Defense for Readiness, Arlington, Virginia
Joint Assessment and Enabling Capability office, Alexandria, Virginia
Joint Staff (J7), Arlington, Virginia
Joint Staff (J7), Suffolk, Virginia
U.S. Northern Command, Peterson Air Force Base, Colorado
U.S. Pacific Command, Camp H. M. Smith, Hawaii
U.S. Strategic Command, Offutt Air Force Base, Nebraska
U.S. Transportation Command, Scott Air Force Base, Illinois

We conducted this performance audit from June 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Guidance from the Office of the Chairman of the Joint Chiefs of Staff outlines the process that is used by the combatant commands to develop joint training programs, plan and execute joint training, and assess training for the Department of Defense's (DOD) Joint Exercise Program. This process, referred to in the guidance as the Joint Training System, is characterized as an integrated, requirements-based, four-phased methodology used to align the Joint Training Strategy with assigned missions to produce trained and ready individuals, staff, and units.
According to the guidance, the Joint Training System has four phases through which the combatant commands execute the Joint Exercise Program.

Phase I—Requirements. During this phase, an ordered listing of tasks is developed describing the armed forces' ability to perform activities or processes that combatant commanders require to execute their assigned missions. This listing is referred to as the Universal Joint Task List, and it provides a common language to describe warfighting requirements for combatant commanders. From this list, the most essential mission capability tasks—mission-essential tasks—are identified by the combatant commander. Using the commander's criteria, mission-essential tasks are prioritized to form the Joint/Agency Mission-Essential Task List. In addition to the combatant commanders' priorities, key documents pertinent to U.S. national strategy, such as the Unified Command Plan, Guidance for Employment of the Force, and other joint doctrine, are analyzed to determine the most essential mission capability requirements for the combatant command. The Joint/Agency Mission-Essential Task List provides the foundation for deriving joint training requirements used to develop Joint Training Plans and training and exercise inputs to theater campaign plans. Training requirements are derived from training proficiency assessments, mission training assessments, and lessons learned that result from Phase IV (Assessment) of the Joint Training System.

Phase II—Plans. The plans phase is initiated by conducting an assessment of current capability against the Joint Mission-Essential Task List and relevant lessons learned to identify gaps in training. To address those gaps, the Joint Training Plan is established and identifies who is to be trained; what they will be trained in; what the training objectives are; and when, where, and how the training will occur. Joint Training Plans, along with training and exercise inputs into theater campaign plans, are developed, coordinated, and published in the Joint Training Information Management System (JTIMS) to identify a commander's training guidance, audiences, objectives, events, and support resources, and to identify the coordination needed to attain the required levels of training proficiency.

Phase III—Execution. During this phase, events planned in the Joint Training Plan are conducted and the training audience's performance objectives are observed and evaluated. Joint training events are developed and executed using the five-stage Joint Event Life Cycle methodology captured and reviewed in JTIMS. Task Performance Observations—which identify whether the training audience achieved the stated level of performance to the standards specified in the training objectives—and the Training Proficiency Evaluations for each training objective associated with the training event are also captured in JTIMS. Further, facilitated after-action reports are developed to highlight potential issues or best practices to support the assessments in Phase IV (Assessment). Validated observations from the training event are exported into JTIMS.

Phase IV—Assessment. During this phase, leadership within the combatant command determines which organizations within the command are able to perform at the level required to meet the task standards and which missions the command is trained to accomplish. Assessments are a commander's responsibility.
To complete Task Performance Assessments for each task, commanders consider Task Performance Evaluations, lessons learned, and personal observations of the joint training exercise. An assessment ranking of trained, partially trained, or untrained is assigned to each task listed under a training objective. JTIMS supports the assessment of joint training by automating the ability of joint organizations to produce Task Performance Assessments. The Task Performance Assessments are analyzed to create the mission training assessment that is provided to a combatant commander on a monthly basis. The mission training assessment addresses how well the command can execute its assigned missions. These training assessments provide input into the next training cycle. Lessons learned, after-action reports, and issues requiring resolution outside of the command are identified during this phase.

Combatant command officials we visited stated that, in accordance with guidance from the Chairman of the Joint Chiefs of Staff, they used the Joint Training System as the process for conducting training assessments of individual joint exercises to determine each command's overall readiness to perform command missions. These assessments occur during Phases III (Execution) and IV (Assessment) of the Joint Training System. During Phase III, for example, command trainers collect task performance observations for each training objective identified in the Joint Training Information Management System (JTIMS). These task performance observations identify whether the individuals and units participating in the training exercise achieved the level of performance specified in the standards for the training objectives. Training proficiency evaluations are conducted for each training objective associated with the exercise. During Phase IV, combatant commanders consider the proficiency evaluations, as well as after-action and commander summary reports, to determine a combatant command's ability to perform assigned missions at the minimum acceptable level under a specified set of conditions.

Performance measures that are specific, measurable, achievable, realistic, and time-phased (commonly referred to as the SMART rubric) are used to assess the Joint Exercise Program, according to an official from the Joint Assessment and Enabling Capability office, a subordinate office to the Office of the Assistant Secretary of Defense for Readiness that provides strategic-level assessments of joint training and joint training enablers throughout the Department of Defense (DOD), including to the combatant commands. DOD officials stated that developing performance measures for joint exercises has not been an easy task and that they are constantly working to improve their performance measures. Specifically, in an effort to develop, improve, and provide quality assurance for specific performance measures, the Joint Assessment and Enabling Capability office works with the combatant commands to ensure that they are using the right measures to evaluate the training audience's ability to perform tasks to specific standards. In addition, the Joint Assessment and Enabling Capability office hosts monthly meetings with combatant command stakeholders to discuss assessment topics, including performance measures. The office also hosts at least one working group meeting at the annual worldwide joint training conference to conduct face-to-face discussions and reviews of assessment-related tasks for joint training.
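As a simple illustration of how a rubric like SMART can be applied to a draft exercise performance measure, the sketch below walks a measure through the five criteria and reports which ones it does not yet satisfy. The measure text, the reviewer ratings, and the function are hypothetical; they are not drawn from the Joint Assessment and Enabling Capability office's actual review tools.

```python
# Hypothetical sketch of a SMART-style review of a draft performance measure.
# The criteria names follow the SMART rubric described in this report; the
# measure, ratings, and function are invented for illustration only.

SMART_CRITERIA = ("specific", "measurable", "achievable", "realistic", "time-phased")

def unmet_criteria(ratings):
    """Return the SMART criteria a draft measure does not yet satisfy."""
    return [criterion for criterion in SMART_CRITERIA if not ratings.get(criterion, False)]

draft_measure = "Training audience issues a validated deployment order within 72 hours of notification."
reviewer_ratings = {
    "specific": True,
    "measurable": True,
    "achievable": True,
    "realistic": False,      # e.g., reviewers judge the 72-hour window unrealistic for this scenario
    "time-phased": True,
}

gaps = unmet_criteria(reviewer_ratings)
if gaps:
    print(f"Measure needs revision; unmet criteria: {', '.join(gaps)}")
else:
    print("Measure meets the SMART rubric.")
```

In practice, this kind of review is an iterative, collaborative exchange between the Joint Assessment and Enabling Capability office and the combatant commands rather than an automated check.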
According to an annual report from the Joint Staff Director for Joint Force Development, the Joint Assessment and Enabling Capability office is available to assist combatant command stakeholders with assessment-related tasks for the Joint Exercise Program, as requested.

Guy A. LoFaro, Assistant Director; Patricia Donahue; Pamela Nicole Harris; Amie Lesser; Sabrina Streagle; Sonja S. Ware; and Cheryl A. Weissman made key contributions to this report.

Civil Support: DOD Needs to Clarify Its Roles and Responsibilities for Defense Support of Civil Authorities during Cyber Incidents. GAO-16-332. Washington, D.C.: April 4, 2016.
Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016.
Operational Contract Support: Actions Needed to Enhance the Collection, Integration, and Sharing of Lessons Learned. GAO-15-243. Washington, D.C.: March 16, 2015.
Defense Headquarters: DOD Needs to Reevaluate Its Approach for Managing Resources Devoted to the Functional Combatant Commands. GAO-14-439. Washington, D.C.: June 26, 2014.
Defense Headquarters: DOD Needs to Periodically Review and Improve Visibility of Combatant Commands' Resources. GAO-13-293. Washington, D.C.: May 15, 2013.
Defense Management: Perspectives on the Involvement of the Combatant Commands in the Development of Joint Requirements. GAO-11-527R. Washington, D.C.: May 20, 2011.
Homeland Defense: U.S. Northern Command Has a Strong Exercise Program, but Involvement of Interagency Partners and States Can Be Improved. GAO-09-849. Washington, D.C.: September 9, 2009.
National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009.
Homeland Defense: Steps Have Been Taken to Improve U.S. Northern Command's Coordination with States and the National Guard Bureau, but Gaps Remain. GAO-08-252. Washington, D.C.: April 16, 2008.
Homeland Defense: U.S. Northern Command Has Made Progress but Needs to Address Force Allocation, Readiness Tracking Gaps, and Other Issues. GAO-08-251. Washington, D.C.: April 16, 2008.
Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-835T. Washington, D.C.: May 15, 2007.
Military Training: Management Actions Needed to Enhance DOD's Investment in the Joint National Training Capability. GAO-06-802. Washington, D.C.: August 11, 2006.
Military Training: Actions Needed to Enhance DOD's Program to Transform Joint Training. GAO-05-548. Washington, D.C.: June 21, 2005.
The Joint Exercise Program is the principal means for combatant commanders to maintain trained and ready forces, exercise contingency and theater security cooperation plans, and conduct joint and multinational training exercises. These exercises are primarily aimed at developing the skills needed by U.S. forces to operate in a joint environment and can also help build partner-nation capacity and strengthen alliances. House Report 114-102 included a provision for GAO to review DOD's Joint Exercise Program. This report (1) describes guidance DOD has developed for its Joint Exercise Program and DOD's approach to assess the effectiveness of the program and (2) evaluates the extent to which DOD uses two information technology systems to manage the program. GAO observed data in JTIMS and analyzed fiscal years 2014-16 financial data and supporting documentation in the Execution Management System. The Department of Defense (DOD) has developed a body of guidance for the Joint Exercise Program and has implemented an approach to assess the effectiveness of the program. In addition to the body of guidance for the program, DOD is working to update a key guidance document for military training in accordance with a congressional requirement. DOD's approach to assess the effectiveness of the Joint Exercise Program is aimed at ensuring that its performance measures are specific, measurable, achievable, realistic, and time-phased (commonly referred to as the SMART rubric). The Joint Assessment and Enabling Capability office reviews the performance measures created by the combatant commands against this rubric and provides input and coaching on improving the measures through an ongoing and collaborative process. DOD uses two key information technology systems—the Joint Training Information Management System (JTIMS) and the Execution Management System—to manage the execution of the Joint Exercise Program, but does not have assurance that funding execution data in the Execution Management System are reliable. JTIMS is the system of record for the Joint Exercise Program that combatant commanders use to plan and manage their joint training exercises. GAO observed significant variation in the type and quality of information entered in JTIMS. Combatant command and Joint Staff officials stated that information in JTIMS lacked consistency in the level of detail provided, sometimes making it difficult to coordinate training with other combatant commands or extract pertinent information about exercises from the system that would be helpful in planning other exercises. Consequently, to help improve the consistency and standardization of information across combatant commands, the Joint Staff published a user guide for JTIMS. Regarding the Execution Management System, a web-based database DOD uses to track the most recent funding execution data for the Joint Exercise Program, GAO found that DOD does not have assurance that the system produces quality information because supporting documentation is not consistently uploaded into the system and, when it is uploaded, it is not reconcilable to the data entered there. Only U.S. Strategic Command and U.S. Northern Command uploaded supporting documentation for fiscal years 2013-16 as required by the Execution Management System guidance. 
Reviewing a nongeneralizable sample of uploaded supporting documentation for fiscal years 2014-16, GAO found that the sum of the individual expenditures reported in supporting documentation did not match corresponding total expenditures entered in the system for any of the four combatant commands included in GAO's review. Further, the four combatant commands GAO visited, the Office of the Chairman of the Joint Chiefs of Staff, and the Office of the Assistant Secretary of Defense for Readiness had not implemented effective internal controls similar to those identified in the Standards for Internal Control in the Federal Government to ensure the completeness and accuracy of financial information captured for the Joint Exercise Program. Without such internal controls, DOD and other key decision makers may not have the financial information of sufficient quality to defend the Joint Exercise Program's budget. GAO recommends that DOD comply with current guidance to upload supporting documentation in the Execution Management System and implement effective internal controls to ensure the completeness and accuracy of financial information. DOD partially concurred with both recommendations, noting existing controls in other related systems of record. GAO believes the recommendations remain valid, as discussed in this report.
Ex-Im Bank is an independent U.S. government agency whose mission is to finance the export of U.S. goods and services overseas and to support U.S. jobs, particularly when private sector lenders are unable or unwilling to accept the risk. Ex-Im Bank provides medium- and long-term loans and guarantees, export credit insurance, and working capital guarantees. Under the loan and guarantee program, Ex-Im Bank guarantees the repayment of loans or makes loans to foreign purchasers of U.S. goods and services. The export credit insurance program provides protection to U.S. exporters against the risks of nonpayment by foreign buyers for political or commercial reasons. The working capital guarantee program provides U.S. exporters with short-term loans and the necessary working capital to pay for raw materials, labor, and overhead to produce goods or provide services for export.

Energy transactions represented a major component of transactions financed by Ex-Im Bank during the 1990s. Energy sector transactions averaged around 27 percent of total Ex-Im Bank loan and guarantee financing during this period and accounted for as much as 47 percent of all Ex-Im Bank financing in 1995. Ex-Im Bank categorizes energy sector transactions according to the end-use industrial activity. That is, U.S. exports of services and equipment used in energy sector projects are considered energy transactions. Energy sector transactions are divided into subsectors that include fossil fuels, nuclear energy, and renewable energy. Examples of exports financed under fossil fuel projects include engineering services, drilling equipment, and turbines. Examples of renewable energy products or services financed include heat exchangers for geothermal power plants, solar electric modules for solar power generation, and engineering services to design a hydroelectric dam. Ex-Im Bank defines renewable energy to include geothermal, hydroelectric, biomass, wind, and solar activities. The definition of renewable energy for different policy purposes is a subject of debate, especially regarding hydroelectric power because of concerns about potential environmental impacts of large dams.

Of the $28 billion Ex-Im Bank provided in loans and guarantees for energy-related projects from 1990 to 2001, about 93 percent was used to finance fossil fuel projects. (See app. II for a discussion of trends in export credit insurance and working capital guarantees.) The number of fossil fuel projects financed each year dropped sharply during the early 1990s, but the values financed annually showed significant fluctuations with no clear trend. For renewable energy, there has been a small volume of overall activity during this period, with most of the financing provided in 1994, when two large geothermal power plants were financed. Trends in final commitment applications submitted for energy sector projects largely mirror the trends in the number and values financed for energy sector projects because 90 percent of these applications were financed.

The number of fossil fuel projects financed annually by Ex-Im Bank decreased significantly over the 1990s, while the values financed fluctuated substantially. (See fig. 1.) Ex-Im Bank financed 474 fossil fuel projects over the period, with the number falling from 91 in 1990 to 15 in 1999, before rising slightly in 2000 and 2001.
The total value financed for fossil fuel projects over the period was about $25.7 billion, with annual values ranging from $546 million in 1999 to more than $3.6 billion in both 1993 and 1995. The average value financed per project increased significantly during the first half of the 1990s, rising from $7 million in 1990 to more than $79 million in 1995. The types of fossil fuel projects Ex-Im Bank financed varied over the period. As shown in figure 2, during the early 1990s, extraction, transport, and processing projects such as oil and gas exploration and the development of oil and gas pipelines dominated Ex-Im Bank's fossil fuel project financing in terms of values financed. In the mid-1990s, however, power production projects, such as power plants using natural gas, oil, and coal, received the most financing. Neither project type was particularly dominant from 1997 to 2000. Projects in Mexico received the largest share of fossil fuel financing during 1990 to 2001, at 16 percent, followed by projects in Venezuela and Algeria, at about 10 percent each. In terms of the numbers of projects, Algeria and Mexico received 43 percent of the total number financed over the 12-year period. Most of these were for small value loans and guarantees financed from 1990 to 1992. Appendix III shows the distribution of Ex-Im Bank's fossil fuel energy projects by recipient country, in terms of the total number of projects and values financed.

For renewable energy, a small number of projects were financed in most years, with the overall value of financing concentrated primarily in one year. As shown in figure 1, from 1990 to 1996, the number of renewable energy projects varied from two to six. Ex-Im Bank did not finance any renewable energy projects from 1997 to 1999, but did finance two renewable energy projects in 2000 and three in 2001. Overall, Ex-Im Bank financed 30 renewable energy projects from 1990 to 2001, accounting for about 6 percent of the total number of energy sector projects financed. Most projects financed between 1990 and 1996 were to construct hydroelectric and geothermal power plants. Of the projects receiving loans and guarantees in 2000 and 2001, three were for hydroelectric engineering services and two were for solar projects. Appendix IV identifies the renewable energy loans and guarantees financed from 1990 to 2001, including the project type, supplier, value financed, and country.

The values financed for renewable energy projects varied dramatically during 1990 through 2001, with the majority of the financing provided in 1994. Overall, Ex-Im Bank financed renewable energy projects totaling $730 million from 1990 through 2001, or about 3 percent of all energy projects financed. Almost 60 percent of these funds were provided in 1994, when two large geothermal projects were financed in the Philippines for almost $395 million. As shown in figure 3, geothermal and hydroelectric projects represented 75 percent and 17 percent of the total value of financing provided for renewable energy projects, while solar, wind, and biomass projects combined accounted for about 8 percent of total financing.

Trends in the number and value of final commitment applications submitted for energy sector projects closely track the trends for energy projects financed, because 90 percent of final commitment applications submitted were financed by Ex-Im Bank.
While Ex-Im Bank offers two earlier types of applications—the letter of interest and preliminary commitment—the final commitment application is the only one required to obtain financing for a project and is the only one used consistently from 1990 to 2001. As shown in figure 4, the number of fossil fuel final commitment applications for loans and guarantees decreased significantly from 1990 to 2001, while the values of financing requested in these applications fluctuated greatly. For renewable energy, the application trends also mirrored those of the overall renewable energy projects financed, with the overall numbers remaining at low levels and the financed values concentrated primarily in 1994. Ex-Im Bank denies very few final applications, and only a small percentage of applications are withdrawn or canceled. From 1990 through 2001, Ex-Im Bank records indicate that only 2 of the 577 energy sector applications were denied; both were fossil fuel projects. During this period, about 10 percent of the energy sector final applications for loans and guarantees were either withdrawn by the applicant or canceled by Ex-Im Bank because the applicant did not meet the requisite terms and conditions.

Ex-Im Bank has not consistently reported to Congress on its efforts to meet the 1989 legislative financing target for renewable energy or its renewable energy promotion efforts. In reviewing Ex-Im Bank's annual reports, we looked for basic information on renewable energy projects that would include the number of projects and values financed, the types of projects, and the value of renewable energy project financing relative to overall energy sector financing. Ex-Im Bank's reporting to Congress was most complete for fiscal year 1990, when Ex-Im Bank provided a report in 1991 to the Committees on Appropriations with specific information regarding both its progress in meeting the 5 percent renewable energy target and its marketing and promotional efforts for renewable energy. This report also provided specific information regarding values financed, types of projects financed, and an estimate of potential demand for future financing. Other than this one-time report to Congress, Ex-Im Bank has typically provided information about its renewable energy efforts in its annual report. During the period 1990 to 2001, Ex-Im Bank's annual reports identified the percentage of renewable energy projects relative to the total energy projects financed in only 3 years—1990, 1991, and 1994. Including all financing types—loans and guarantees, insurance, and working capital guarantees—Ex-Im Bank met the 5 percent target twice, in 1990 and 1994, and came close in 1996, when renewable energy projects accounted for 4.8 percent of the total values financed. (See fig. 5.)

Ex-Im Bank's annual reports since 1990 contained varying amounts of additional information regarding its efforts to promote renewable energy. Overall, Ex-Im Bank provided the most consistent reporting from fiscal years 1990 to 1994, which included the number of projects and values financed, types of projects, and countries where the projects were implemented. The 1995 and 1998 reports did not address renewable energy. Various factors have affected Ex-Im Bank's renewable energy financing, including worldwide economic conditions and energy consumption patterns, financing challenges faced by diverse renewable energy suppliers, foreign government support of renewable energy sectors, and environmental concerns.
Ex-Im Bank has not placed a priority on promoting renewable energy exports, but it has addressed the sector through its general marketing efforts and its Environmental Exports Program. In May 2002, Ex-Im Bank established the Renewable Energy Exports Advisory Committee to help expand its support of U.S. renewable energy exporters.

Broad economic conditions and market trends are important to Ex-Im Bank's overall financing and energy sector patterns. These include, for example, exchange rates and economic growth trends. While identifying the impacts of these factors is complex, macroeconomic factors have been identified as particularly important in the geothermal sector. According to industry representatives and analysts, the Asian financial crisis and subsequent economic and political turmoil in Southeast Asia was a key reason for a decline in construction of geothermal facilities in the region in the late 1990s.

The relatively small share of most renewable resources in world energy consumption, due partly to cost disadvantages, is viewed as a key factor underlying the demand for Ex-Im Bank financing. According to Department of Energy estimates, in 1999 about 7 percent of world energy consumption was from hydroelectricity and 1 percent from other renewable sources. For energy used for electricity generation, hydroelectricity supplied 19 percent and other renewables 2 percent. A primary reason for this relatively small share of renewables is cost, according to government and industry assessments. While the costs of some renewable energy technologies have decreased, they have generally not been competitive with fossil fuels for most uses, according to these assessments. A related factor is that the feasibility of renewable energy projects often depends on environmental factors such as the location of rivers, geothermal heat sources, and wind supply.

The renewable energy market is diverse, with sectors and firms varying in terms of key characteristics that could affect the demand for Ex-Im Bank financing. These characteristics include, for example, firm size and exporting experience, project risk, and payback periods. The geothermal sector includes large-scale power production and smaller-scale direct heating and agricultural uses. Project risk can be high, with substantial exploration and development costs. The solar energy sector includes multinational producers of photovoltaics for export to electric utilities as well as producers of off-grid equipment that can include small-scale uses. U.S. wind energy suppliers include one firm producing for large-scale on-grid utility uses and a number of firms providing for smaller-scale power generation.

Representatives of different renewable energy sectors have cited various exporting challenges or financing needs, not necessarily under Ex-Im Bank's control, including:

Actual or perceived financial risk of renewable energy projects;

For small businesses, lack of investment capital or contacts in export markets;

Lack of credit-worthy buyers for certain types of renewable energy projects, such as smaller scale projects in developing countries;

Need in some sectors for longer repayment terms due to higher up-front costs; and

Difficulty in understanding financing options and coordinating financing among exporters, buyers, financial institutions, sources of funding assistance, and local governments.

Government support has been an important factor in the growth of renewable energy.
Foreign government support, for example, is seen as critical to rapid growth in the international wind and photovoltaic markets. Several European countries and Japan have used various strategies and financial incentives for increasing renewable energy in their domestic markets. World photovoltaic shipments almost tripled between 1994 and 2000, due in part to subsidized programs in Europe and Japan. Similarly, the world wind energy market grew sharply between 1994 and 2001, due in part to government support and growth in Europe. The United States has had some production incentives and tax credits for renewable energy at the state and federal levels, but their impact has varied depending on the amounts and certainty of the initiatives. According to the Department of Energy, nonhydroelectric renewable electricity generation in the United States declined between 1993 and 1998. The U.S. domestic wind energy market did grow strongly in 2001, which analysts attribute in part to firms taking advantage of a federal production tax credit scheduled to expire at the end of 2003.

Governments have provided official development assistance for renewable energy projects in developing countries, including concessional loans and grants. According to analysts and industry representatives, such assistance can in some cases yield advantages to donor country exporters. Links to exports are explicit in cases of tied aid, where trade-related concessional financing of public sector capital projects is conditional on the procurement of goods and services from the donor country. Many industrialized countries, including the United States, view tied aid as potentially trade-distorting and agreed in 1992 to limits on its use. Renewable energy projects are often exempt from these international restrictions because they are not considered commercially viable. Ex-Im Bank has matched tied aid offers by other countries in some instances. From 1991 to 2001, Ex-Im Bank funded four tied aid projects for renewable energy. According to some renewable energy industry representatives, tied aid has not generally been viewed as a viable export financing option for U.S. renewable energy exporters because of the documentation requirements and the length of the process.

Increased public concerns about the environmental and social impacts of large hydroelectric dams may have affected financing of hydroelectric projects, according to Ex-Im Bank and industry officials. Ex-Im Bank adopted environmental procedures and guidelines in February 1995, which provide for qualitative and quantitative assessments of air and water quality, management of hazardous and toxic materials and waste, cultural and ecological effects, and other factors. Environmental concerns regarding hydroelectric power plants were highlighted in 1996 when the Yangtze Three Gorges hydroelectric power plant was proposed by China. Although Ex-Im Bank was approached regarding financing, the project proceeded with financing from other sources and has continued to be controversial. Ex-Im Bank did not finance any hydroelectric projects from 1997 to 1999, but did finance engineering and architectural services for two hydroelectric projects in Turkey in 2000 and one in 2001. According to Ex-Im Bank officials and some environmental groups, issues regarding the Bank's financing activities in the hydroelectric sector illustrate a tension between increasing renewable energy financing and responding to environmental concerns.

Ex-Im Bank has not focused on or allocated specific resources to promote the renewable energy sector.
Instead, Ex-Im Bank has addressed this sector through its general marketing efforts and the Environmental Exports Program. With the exception of aircraft sales, Ex-Im Bank does not target its resources or marketing efforts toward specific industry sectors, according to senior Ex-Im Bank officials. Instead, Ex-Im Bank's business development officers are assigned geographic regions and are expected to promote all sectors, such as energy, telecommunications, and manufacturing equipment, within their respective regions. According to Ex-Im Bank officials, an environmental liaison officer was appointed in 1994 to focus exclusively on promoting and developing environmentally beneficial projects, which by definition include renewable energy projects. However, the individual in that position has been assigned other duties over time, and the official's portfolio now includes responsibility for the South America region and the medical equipment sector. Several trade association and industry officials said this dilution of responsibility has affected Ex-Im Bank's ability to effectively promote renewable energy exports. They stressed that having an experienced person dedicated specifically to renewable energy is critical to providing effective linkages among Ex-Im Bank, exporters, foreign buyers, financiers, and other U.S. government agencies.

According to Ex-Im Bank officials, the Bank's efforts to promote small businesses have benefited some renewable energy exporters. In 2000, Congress required that not less than 10 percent of all Ex-Im Bank annual financing be provided to support small businesses. Ex-Im Bank officials said that the product typically best suited to meet the needs of renewable energy small businesses is short- or medium-term insurance. Of the nine renewable energy-related insurance policies underwritten by Ex-Im Bank since 1999, seven were provided to three small businesses.

Although Ex-Im Bank has financed some renewable energy projects under its Environmental Exports Program, the program's impact on Ex-Im Bank's financing of renewable energy projects appears to be limited. Ex-Im Bank established the Environmental Exports Program in 1994 to provide enhanced levels of support for a broad range of exports deemed environmentally beneficial. Of the $3.1 billion financed for environmentally beneficial projects from 1994 to 2001, about $457 million was provided to finance renewable energy projects—of which $333 million was financed in 1994. Meanwhile, fossil fuel projects deemed environmentally beneficial received just over $2 billion. Ex-Im Bank officials said they have not seen a notable increase in renewable energy applications or projects financed since the program was introduced. Although Ex-Im Bank provided $113 million for environmentally beneficial renewable energy projects in 1996, it did not finance other renewable energy projects again until 2000 and 2001, when it financed transactions totaling approximately $5 million and $6 million, respectively. Several Ex-Im Bank officials attributed this recent activity in the renewable energy sector to Ex-Im Bank's focus on providing loans and short-term insurance to small businesses.

Ex-Im Bank and renewable energy industry officials have acknowledged that Ex-Im Bank can do a better job of promoting its products and services to renewable energy sectors. Officials identified Ex-Im Bank's establishment of a Renewable Energy Exports Advisory Committee in May 2002 as an effort to help the Bank expand its support of U.S. renewable energy exporters.
Over the next 2 years, the advisory committee will focus on specific issues such as how Ex-Im Bank can modify its existing programs, what new financing products or changes to existing products should be considered, and how to improve its outreach to U.S. renewable energy exporters and foreign buyers.

Congress has demonstrated a long-standing and continued interest in Ex-Im Bank's efforts to promote the export of renewable energy products and services. While Ex-Im Bank has undertaken some efforts to increase its funding of renewable energy exports, these efforts have been limited. This report highlights several factors and challenges affecting renewable energy exports. Some factors, such as cost disadvantages in many markets, are largely outside Ex-Im Bank's control, while others, such as product terms and the allocation and targeting of business development resources, represent areas in which Ex-Im Bank has some control. In addition, Ex-Im Bank's renewable energy financing to date shows how a few large projects can account for the majority of financing in an area, and illustrates that significant small-scale renewable energy financing activity could take place with relatively low values financed.

Ex-Im Bank's renewable energy efforts can be measured and reported in various ways. In addition to information on the programs and initiatives undertaken to promote renewable energy, specific information about project financing would be helpful to Congress. Although Ex-Im Bank has provided specific funding information to Congress for some reporting periods, it has not provided this information consistently. Such information can help Congress better track and understand Ex-Im Bank's efforts to promote renewable energy and identify emerging trends and challenges in financing renewable energy projects.

We recommend that, in reporting on the Bank's renewable energy efforts under Ex-Im Bank's 2002 reauthorization act, the Chairman of the Export-Import Bank provide adequate information for Congress to assess these efforts and the types of challenges Ex-Im Bank faces. In addition to information on types of outreach and specific processes or programs to promote renewable energy exports, Ex-Im Bank should provide information on the types and amounts of financing actually provided, including the number and values financed for renewable energy transactions each year, and the specific renewable energy sectors to which the financing is provided.

Ex-Im Bank provided written comments on a draft of this report, which are reprinted in appendix V. In its response, Ex-Im Bank reiterated the importance of a number of factors identified in the report as significant to the Bank's energy sector financing trends, including broad economic and market trends. Ex-Im Bank also expressed the view that the report understates the Bank's support of renewable energy sector exports. We believe that the report appropriately identifies both external and internal factors that have affected the Bank's energy sector financing, and points out the difficulty of determining the specific impacts of various factors. Ex-Im Bank stated that, in comparing its financing of renewable energy and fossil fuel exports, we should have included only the fossil fuel exports for power generation and excluded extraction, transportation, and processing projects, such as pipeline construction. Our analysis is based on energy sector project data provided to us by Ex-Im Bank, which included both categories of fossil fuel-related energy financing.
We believe that comparing renewable energy sector financing to only a portion of fossil-fuel-related financing would have been inappropriate for demonstrating overall financing trends. Ex-Im Bank did not comment on our recommendation that Ex-Im Bank’s future reporting to Congress on its renewable energy efforts include specific information on its financing of renewable energy projects. We are sending copies of this report to the appropriate congressional committees and the Honorable Eduardo Aguirre, Vice Chairman, Export-Import Bank of the United States. Copies will also be made available to others upon request. In addition, this report is available on GAO’s Web site at no charge at http://www.gao.gov. Please contact me at (202) 512-4347 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI. In response to Chairman Bereuter’s request, we identified and assessed (1) trends in Ex-Im Bank’s financing of and applications for fossil fuel and renewable energy-related projects, (2) the extent of Ex-Im Bank’s reporting to Congress on its renewable energy efforts, and (3) key factors affecting Ex-Im Bank’s renewable energy sector financing. To meet these objectives, we analyzed a range of documents and interviewed policy and program officials from the Export-Import Bank as well as energy trade associations, private sector companies, think tanks, and nongovernmental organizations. To address the first objective, we obtained the cooperation of Ex-Im Bank’s Engineering and Environment Division staff in creating reports from two different databases—one for loans and guarantees and the other for insurance—to identify the number and value of energy-related transactions that Ex-Im Bank financed by each product type (loans and guarantees, insurance, and working capital guarantees) for fiscal years 1990 through 2001. The reports were further divided by subsectors, which included fossil fuel extraction, transport and processing, fossil fuel power generation, renewable energy, and nuclear energy. Ex-Im Bank also provided similar reports for applications submitted but not supported by Ex-Im Bank for loans and guarantees by various subsectors. Ex-Im Bank did not provide applications data for insurance or working capital guarantees. Applications data were reported in the fiscal years in which they were received, while project data were reported in the fiscal years in which they were financed. We analyzed these reports to identify trends in the number and values financed for energy sector projects as well as the number and value of energy sector applications submitted. We did not focus on nuclear energy projects because they are outside the scope of our request and comprise only a small percentage of Ex-Im Bank’s energy sector portfolio. The report, however, notes that nuclear energy projects account for the balance of energy sector projects financed when combined with fossil fuel and renewable energy projects. Ex-Im Bank officials noted concerns over the reliability and completeness of some of the data, particularly for insurance transactions. Reliability issues occur because insurance transactions often include multi-buyer policies that cover many products and services. These products and services may fall in different sectors, making such policies difficult to characterize under one sector code.
Further, insurance underwriters code the transaction according to the principal product or service, not according to the project’s end-use, as the loans and guarantees division would do. Ex-Im Bank officials estimated that the insurance data provided are about 75 percent accurate but noted that increased accuracy would require the review of each policy, a large investment of time. Ex-Im Bank officials also noted that insurance records prior to 1992 were not readily available. We chose to focus our principal findings on the loans and guarantees programs because of these concerns and because loans and guarantees account for 89 percent of the value of energy sector projects financed by Ex-Im Bank. We discuss trends in the number and values financed for insurance and working capital guarantees in appendix II. We also focused on loans and guarantees because Ex-Im Bank provided data for both the applications submitted and projects financed for the period 1990 to 2001. We compared these data with data used in other Ex-Im Bank reports to assess their reliability and found them to be consistent. To address the second objective, we reviewed the 1989 legislation that established the Ex-Im Bank renewable energy-financing target and reporting requirement. We also reviewed Ex-Im Bank’s 2002 reauthorization act, which includes a reporting requirement for Ex-Im Bank’s renewable energy promotion efforts. To ascertain the extent to which Ex-Im Bank reported data to Congress regarding its renewable energy efforts, we analyzed Ex-Im Bank’s annual reports for fiscal years 1990 to 2001 and a 1991 report to the Committees on Appropriations. To determine renewable energy projects’ percentage of the total value financed for the energy sector, we analyzed the energy sector project reports provided by Ex-Im Bank for fiscal years 1990 to 2001. To address the third objective regarding factors that affected the increases and decreases in Ex-Im Bank’s energy sector financing, we analyzed reports on energy sector trends. We reviewed relevant Ex-Im Bank and GAO reports regarding tied aid provided by the United States and foreign governments. To obtain industry perspectives on the factors affecting trends, we discussed these issues with representatives from various renewable energy trade associations, including the American Wind Energy Association, Solar Energy Industries Association, U.S. Hydropower Council for International Development, Geothermal Energy Association, and U.S. Export Council on Energy Efficiency. We also interviewed officials from the International Rivers Network, Institute for Policy Studies, and several private sector renewable energy firms. To identify factors internal to Ex-Im Bank that affected energy sector trends, we analyzed Ex-Im Bank program data relating to its efforts to promote renewable energy, the Environmental Exports Program, and the Renewable Energy Exports Advisory Committee. We also interviewed policy and program officials from Ex-Im Bank to discuss the trends and factors. We conducted our review from December 2001 through September 2002 in accordance with generally accepted government auditing standards. While loans and guarantees have traditionally accounted for 89 percent of Ex-Im Bank’s energy sector portfolio, export credit insurance and working capital guarantees represented about 10 percent and less than 1 percent of the values financed, respectively.
The values of export credit insurance for fossil fuel projects fluctuated, while the number of fossil fuel transactions declined. Conversely, the renewable energy sector showed a slight increase in both the value financed and the number of insurance transactions during this period. Meanwhile, the value of working capital guarantees for fossil fuels increased incrementally, while the number of transactions varied. Only two renewable energy projects received working capital guarantees during this period. Ex-Im Bank provided insurance for 281 energy sector projects totaling $2.9 billion from 1992 through 2001 under the export credit insurance program. As shown in figure 6, the values financed for fossil fuel energy projects varied from a high of $749 million in 1992 to lows of $45 million and $52 million in 1997 and 2001, respectively. Meanwhile, the number of insurance transactions financed for fossil fuel projects declined steadily by more than 50 percent—from 39 to 18 fossil fuel transactions—from 1992 through 2001. While the number and values financed for renewable energy projects under export credit insurance increased during this period, the overall export credit insurance financing for renewable energy amounted to only $3.5 million for 12 transactions. Ex-Im Bank did not finance any renewable energy insurance transactions in 4 of the 10 years analyzed, but the value financed increased from $170,850 in 1994 to $711,000 in 2001. Financing peaked in 1998, when Ex-Im Bank financed over $1 million in insurance transactions. Similarly, the number of renewable energy projects increased from zero in 1992 to five in 2001, reflecting Ex-Im Bank’s focus on using the insurance program to reach small businesses, including renewable energy businesses. Ex-Im Bank financed working capital guarantees for 64 energy sector projects totaling over $120 million from 1992 through 2001. As shown in figure 7, the financing provided for working capital guarantees for fossil fuel projects decreased to zero in 1994 but then increased incrementally through 2000. The values financed doubled in 2001—from $14 million in 2000 to about $28 million. Meanwhile, the number of working capital guarantees provided for fossil fuel projects during the period increased, with some variations from year to year. The number of fossil fuel projects financed ranged from 0 in 1994 to 10 in 1997 and 1999. Over 80 percent of the fossil fuel working capital guarantees were provided after 1995. The only renewable energy financing through the working capital guarantee program consisted of two wind energy projects, for which Ex-Im Bank provided $8.9 million in 1996.
[Appendix table omitted; the companies listed, some with multiple transactions, included International Drilling Integrated Power Corporation, M/G Electric, Inc., Ormat, Inc., Caterpillar, Inc., Siemens Solar Industries, Geothermal Power Company, Inc., Mid American Holdings Company, Integrated C-E Services, Inc., Sargent and Lundy, LLC, Voith Hydro, Inc., National-Oilwell, Inc., Enron Wind Systems, Inc., BP Solarex, Kaiser Engineers & Constructors, Inc., and Washington Group International, Inc.]
In addition to those named above, Nathan A. Morris, Lynn Cothern, and Ernie Jackson made key contributions to this report.
Export Promotion: Mixed Progress in Achieving a Governmentwide Strategy (GAO-02-850, Sept. 4, 2002).
Export Promotion: Export-Import Bank and Treasury Differ in Their Approaches to Using Tied Aid (GAO-02-741, June 28, 2002).
Export Promotion: Government Agencies Should Combine Small Business Export Training Programs (GAO-01-1023, Sept. 21, 2001).
Renewable Energy: DOE’s Funding and Markets for Wind Energy and Solar Cell Technologies (GAO/RCED-99-130, May 14, 1999).
U.S. Export-Import Bank’s Asian Financial Exposure (GAO/NSIAD-98-150R, Apr. 17, 1998).
Export Finance: Federal Efforts to Support Working Capital Needs of Small Business (GAO/NSIAD-97-20, Feb. 13, 1997).
Export-Import Bank: Reauthorization Issues (GAO/T-NSIAD-97-147, Apr. 29, 1997).
Export-Import Bank: Options for Achieving Possible Budget Reductions (GAO/NSIAD-97-7, Dec. 20, 1996).
Export Finance: Comparative Analysis of U.S. and European Union Export Credit Agencies (GAO/GGD-96-1, Oct. 24, 1995).
Export Finance: The Role of the U.S. Export-Import Bank (GAO/GGD-93-39, Dec. 23, 1992).
Export Promotion: Federal Efforts to Increase Exports of Renewable Energy Technologies (GAO/GGD-93-29, Dec. 30, 1992).
The U.S. Export-Import Bank: The Bank Provides Direct and Indirect Assistance to Small Businesses (GAO/GGD-92-105, Aug. 21, 1992).
From 1990 through 2001, the Export-Import Bank (Ex-Im Bank) of the United States provided export financing commitments totaling $31 billion to promote the export of U.S. goods and services for use in the energy sector. The energy sector is divided into fossil fuel, renewable, and nuclear energy. Financing is provided through a range of products, including loans and guarantees, export credit insurance, and working capital guarantees. Of the $28 billion Ex-Im Bank provided in loans and guarantees for energy-related projects from 1990 to 2001, 93 percent was used to finance fossil fuel projects, and 3 percent was for renewable energy projects. Trends in applications for fossil fuel and renewable energy projects largely mirrored trends in the energy projects financed because 90 percent of applications submitted were financed. Since 1990, Ex-Im Bank has not consistently provided information about its renewable energy program to Congress; its 1995 and 1998 annual reports did not address renewable energy. Ex-Im Bank's energy portfolio is affected by broad factors such as worldwide market conditions and to some degree by its policies, promotion efforts, and programs. The relatively small share of renewable energy in worldwide energy consumption, due in part to cost factors, is a key factor. Although Ex-Im Bank has undertaken some efforts to promote renewable energy, it has not focused specifically on this sector.
The No Child Left Behind Act of 2001 (NCLBA) reauthorized the Elementary and Secondary Education Act of 1965 (ESEA) and built upon accountability requirements created under a previous reauthorization, the Improving America’s Schools Act of 1994 (IASA). Under ESEA, as amended, Congress sought to improve student learning by incorporating academic standards and assessments in the requirements placed on states. Academic standards, which describe what students should know and be able to do at different grade levels in different subjects, help guide school systems in their choice of curriculum and help teachers plan for classroom instruction. States are required to administer assessments, which they use to measure student progress in achieving the standards. NCLBA further strengthened some of the accountability requirements contained in ESEA, as amended. Specifically, NCLBA’s accountability provisions require states to develop education plans that establish academic standards and performance goals for schools to make adequate yearly progress (AYP) and that lead to 100 percent of their students being proficient in reading, math, and science by 2014. This proficiency must be assessed annually in reading and math in grades 3 through 8 and periodically in science, whereas assessments were required less frequently under the IASA. Under NCLBA, schools’ assessment data generally must be disaggregated to assess progress toward state proficiency targets for students in certain designated groups, including low-income students, minority students, students with disabilities, and those with limited English proficiency. Each of these groups must make AYP in order for the school to make AYP. Schools that fail to make AYP for 2 or more consecutive years are required to implement various improvement measures identified in NCLBA, and these measures are more extensive than those required under IASA. Education, which has responsibility for general oversight of NCLBA, reviews and approves state plans for meeting AYP requirements. As we have previously reported, Education had approved all states’ plans—fully or conditionally—by June 2003. NCLBA also recognizes the role of teachers in providing a quality education by requiring states to ensure that all teachers in core academic subjects are “highly qualified.” Under this requirement, teachers generally must have a bachelor’s degree, be fully certified, and demonstrate their knowledge of the subjects they teach. Previously, there were no specific requirements regarding teacher quality under ESEA, as amended. According to our analysis of data from Education’s National Longitudinal Study of No Child Left Behind (NLS-NCLB), most principals reported that their schools focused on multiple instructional practices in their voluntary school improvement efforts. These strategies were used more often at schools with higher proportions of low-income students (“high-poverty schools”) and schools with higher proportions of minority students (“high-minority schools”) than at schools with lower proportions of low-income students (“low-poverty schools”) and schools with lower proportions of minority students (“low-minority schools”). Likewise, the survey of math teachers in California, Georgia, and Pennsylvania indicates that teachers were using many different instructional practices in response to their state tests, and teachers at high-poverty and high-minority schools were more likely than teachers at low-poverty and low-minority schools to have been increasing their use of some of these practices.
Some researchers we spoke with suggested that differences in the use of these instructional practices exist because schools with low-poverty or low-minority student populations might generally be meeting accountability standards and, therefore, would need to try these strategies less frequently. According to nationally representative data from Education’s NLS-NCLB, in school year 2006-2007 most principals focused on multiple strategies in their school improvement efforts. The survey asked principals the extent to which their schools were focusing on 10 different strategies in their voluntary school improvement initiatives. The three most common strategies were: (1) using student achievement data to inform instruction and school improvement; (2) providing additional instruction to low-achieving students; and (3) aligning curriculum and instruction with standards and/or assessments. (See fig. 1.) Nearly all school principals placed a major or moderate focus on three or more surveyed strategies in their school improvement efforts, and over 80 percent of principals placed a major or moderate focus on six or more strategies. However, as Education’s report on the survey data cautioned, the number of improvement strategies emphasized was not necessarily an indication of the intensity or quality of the improvement efforts. While nearly all principals responded that they used multiple improvement strategies, there were statistically significant differences in principals’ responses across a range of school characteristics, including percentage of the school’s students receiving free or reduced-price lunch (poverty), percentage of minority students, the school’s location, and AYP status. For example, when comparing schools across poverty levels, we found that principals at high-poverty schools were two to three times more likely than principals at low-poverty schools to focus on five particular strategies in their school improvement efforts:
Restructuring the school day to teach core content areas in greater depth;
Increasing instructional time for all students (e.g., by lengthening the school day or year, shortening recess);
Providing extended-time instructional programs (e.g., before-school, after-school, or weekend instructional programs);
Implementing strategies for increasing parents’ involvement in their children’s education; and
Increasing the intensity, focus, and effectiveness of professional development.
Likewise, when comparing schools across minority levels, we found that principals at high- and moderate-minority schools were approximately two to three times more likely than principals at low-minority schools to make six particular school improvement strategies a major or moderate focus of their school improvement efforts. For instance, principals at schools with a high percentage of minority students were more than three times as likely as principals at schools with a low percentage of minority students to provide extended-time instruction such as after-school programs. A school’s location was associated with differences in principals’ responses about the strategies they used as well: principals at rural schools were only about one-third to one-half as likely as principals at central city schools to make five of these school improvement strategies a moderate or major focus of their school improvement efforts.
When we compared principal responses based on AYP status, there was some evidence of a statistically significant association between AYP status and the extent to which principals focused on these strategies in their school improvement efforts, but the association was limited when other variables such as poverty and minority status were taken into account. AYP status had some correlation with the demographic characteristics of poverty and minority, and those characteristics explained the patterns of principals’ responses more fully than the AYP characteristic. However, our analysis generally showed that schools that had not made AYP were more likely to make six of these school improvement strategies a moderate or major focus of their school improvement plan than schools that had made AYP. Additionally, Education reported that schools identified for improvement under NCLBA—that is, schools that have not made AYP for two or more consecutive years—were engaged in a greater number of improvement efforts than non-identified schools. Therefore, principals of the non-identified schools may have been less likely than principals of identified schools to view specific strategies as a major or moderate focus. We spoke with several researchers about the results of our analysis of the principals’ responses, especially at high-poverty and high-minority schools. While the researchers could not identify with certainty the reasons for the patterns, they noted that high-poverty and high-minority schools tend to be most at risk of not meeting their states’ standards, so that principals at those schools might be more willing to try different approaches. Conversely, the researchers noted that principals at schools meeting standards would not have the same incentives to adopt as many school improvement strategies. The RAND survey of elementary and middle school math teachers in California, Georgia, and Pennsylvania showed that in each of the three states at least half of the teachers reported increasing their use of certain instructional practices in at least five areas as a result of the statewide math test (see fig. 2). For example, most teachers in Pennsylvania responded that due to the state math test they: (1) focused more on standards, (2) emphasized assessment styles and formats, (3) focused more on subjects tested, (4) searched for more effective teaching methods, and (5) spent more time teaching content. As we did with the survey responses of principals, we analyzed the teacher survey data to determine whether math teachers’ responses differed by school characteristics for poverty, minority, location, and AYP status. As with the principals’ responses, we found that elementary and middle school math teachers in high-poverty and high-minority schools were more likely than teachers in low-poverty and low-minority schools to report increasing their use of certain instructional practices, and this pattern was consistent across the three states (see fig. 3). For example, 69 percent of math teachers at high-poverty schools in California indicated they spent more time teaching test-taking strategies, compared with 38 percent of math teachers in low-poverty schools. In Georgia, 50 percent of math teachers in high-poverty schools reported offering more outside assistance to non-proficient students, in contrast to 26 percent of math teachers in low-poverty schools.
Fifty-one percent of math teachers at high-poverty schools in Pennsylvania reported focusing more attention on students close to proficiency, compared with 23 percent of math teachers doing so in low-poverty schools. Similar to what our poverty analysis showed, survey responses provided some evidence that math teachers in high-minority schools were more likely than those in low-minority schools to change their instructional practices. Math teachers at high-minority schools in each of the three states, as compared with those at low-minority schools, were more likely to:
rely on open-ended tests in their own classroom assessments;
increase the amount of time spent teaching mathematics by replacing non-instructional activities with mathematics instruction;
focus on topics emphasized in the state math test; and
teach general test-taking strategies.
We also analyzed the RAND data with regard to school location and a school’s AYP status, but results from these characteristics were not significant for as many instructional practices. As with the survey responses of principals, we spoke to several researchers, including the authors of the three-state teacher study, about possible reasons for the patterns we saw in the teacher survey data. The researchers we spoke with provided possible reasons for the patterns in the teacher survey similar to those they provided for the principal survey. For instance, the researchers noted that high-poverty and high-minority schools are more likely to be at risk of failing to meet the state standards, which might prompt teachers to try different approaches. On the other hand, the researchers stated that teachers at schools meeting the standards would not have the same incentives to change their instructional practices. Research shows that using a standards-based curriculum that is aligned with corresponding instructional guidelines can positively influence teaching practices. Specifically, some studies reported that teachers changed their practices to help students develop higher-order thinking skills, such as interpreting meaning, understanding implied reasoning, and developing conceptual knowledge, through practices such as multiple-answer problem solving, less lecturing, and more small-group work. Additionally, a few researchers we interviewed stated that a positive effect of NCLBA’s accountability provisions has been a renewed focus on standards and curriculum. However, some studies indicated that teachers’ practices did not always reflect the principles of standards-based instruction and that current accountability policies contribute to the difficulty of aligning practice with standards. Some research shows that, while teachers may be changing their instructional practices in response to standards-based reform, these changes may not be fully aligned with the principles of the reform. That research also notes that the consistency with which standards were implemented in the classroom varied according to teachers’ beliefs in and support for standards-based reform, as well as limitations in their instructional capabilities. For example, one observational study of math teachers showed that, while teachers implemented practices envisioned by standards-based reform, such as getting students to work in small groups or using manipulatives (e.g., cubes or tiles), their approaches did not go far enough in that students were not engaged in conversations about mathematical or scientific concepts and ideas.
To overcome these challenges, studies point to the need for teachers to have opportunities to learn, practice, and reflect on instructional practices that incorporate the standards, and then to observe their effects on student learning. However, some researchers have raised concerns that current accountability systems’ focus on test scores and mandated timelines for achieving proficiency levels for students do not give teachers enough time to learn, practice, and reflect on instructional practices and may discourage some teachers from trying ambitious teaching practices envisioned by standards-based reform. Another key element of a standards-based accountability system is assessments, which help measure the extent to which schools are improving student learning by assessing student performance against the standards. Some researchers note that assessments are powerful tools for managing and improving the learning process by providing information for monitoring student progress, making instructional decisions, evaluating student achievement, and evaluating programs. In addition, assessments can influence instructional content and help teachers use or adjust specific classroom practices. As one synthesis concluded, assessments can influence whether teachers broaden or narrow the curriculum, focus on concepts and problem solving, or emphasize test preparation over subject matter content. In contrast, some of the research and a few experts we interviewed raised concerns about testing formats that do not encourage challenging teaching practices and about instructional practices that narrow the curriculum as a result of current assessment practices. For example, research has shown that, depending on the test used, teachers may be influenced to use teaching approaches that reflect the skills and knowledge to be tested. Multiple choice tests tend to focus on recognizing facts and information, while open-ended formats are more likely to require students to apply critical thinking skills. Conclusions from a literature synthesis conducted by the Department of Education stated that “teachers respond to assessment formats used, so testing programs must be designed and administered with this influence in mind. Tests that emphasize inquiry, provide extended writing opportunities, and use open-ended response formats or a portfolio approach tend to influence instruction in ways quite different from tests that use closed-ended response formats and which emphasize procedures.” We recently reported that states have most often chosen multiple choice items over other types of assessment items because they are cost-effective and can be scored within tight time frames. While multiple choice tests provide cost and time-saving benefits to states, the use of multiple choice items makes it difficult, if not impossible, to measure highly complex content. Other research has raised concerns that, to avoid potential consequences from low-scoring assessment results under NCLBA, teachers are narrowing the curriculum being taught—sometimes referred to as “teaching to the test”—either by spending more classroom time on tested subjects at the expense of other non-tested subjects, restricting the breadth of content covered to focus only on the content covered by the test, or focusing more time on test-taking strategies than on subject content. Our literature review found some studies that pointed to instructional practices that appear to be effective in raising student achievement.
But when we discussed the broader implications of these studies with the experts we interviewed, many commented that, taken overall, the research is not conclusive about which specific instructional practices improve student learning and achievement. Some researchers stated that this was due to methodological issues in conducting the research. For example, one researcher explained that, while smaller research studies on very specific strategies in reading and math have sometimes shown powerful relationships between the strategy used and positive changes in student achievement, results from meta-analyses of smaller studies have been inconclusive in pointing to similar patterns in the aggregate. A few other researchers stated that the lack of empirical data about how instruction unfolds in the classroom hampers understanding of what works in raising student performance. A few researchers also noted that conducting research in a way that would yield more conclusive results is difficult. One of the main difficulties, as explained by one researcher, is the number of variables a study may need to examine or control for in order to understand the effectiveness of a particular strategy, especially given the number of interactions these variables could have with each other. One researcher mentioned cost as a challenge when attempting to gather empirical data at the classroom level, stating “teaching takes place in the classroom, but the expense of conducting classroom-specific evaluations is a serious barrier to collecting this type of data.” Finally, even when research supports the efficacy of a strategy, it may not work with different students or under varying conditions. In raising this point, one researcher stated that “educating a child is not like making a car” whereby a production process is developed and can simply be repeated again and again. Each child learns differently, creating a challenge for teachers in determining the instructional practices that will work best for each student. Some of the practices that both the studies and a few experts identified as having potential for improving student achievement were:
Differentiated instruction. In this type of instruction, teaching practices and plans are adjusted to accommodate each student’s skill level for the task at hand. Differentiated instruction requires teachers to be flexible in their teaching approach by adjusting the curriculum and presentation of information for students, thereby providing multiple options for students to take in and process information. As one researcher described it, effective teachers understand the strategies and practices that work for each student and in this way can move all students forward in their learning and achievement.
More guiding, less telling. Researchers have identified two general approaches to teaching: didactic and interactive. Didactic instruction relies more on lecturing and demonstrations, asking short answer questions, and assessing whether answers are correct. Interactive instruction focuses more on listening and guiding students, asking questions with more than one correct answer, and giving students choices during learning. As one researcher explained, both teaching approaches are important, but some research has shown that giving students more guidance and less direction helps students become critical and independent thinkers, learn how to work independently, and assess several potential solutions and apply the best one.
These kinds of learning processes are important for higher-order thinking. However, implementing “less instruction” techniques requires a high level of skill and creativity on the part of the teacher.
Promoting effective discourse. An important corollary to the teacher practice of guiding students versus directing them is effective classroom discussion. Research highlights the importance of developing students’ understanding not only of the basic concepts of a subject but also of higher-order thinking and skills. To help students achieve understanding, it is necessary to have effective classroom discussion in which students test and revise their ideas, and elaborate on and clarify their thinking. In guiding students to an effective classroom discussion, teachers must ask engaging and challenging questions, be able to get all students to participate, and know when to provide information or allow students to discover it for themselves. Additionally, one synthesis of several experimental studies examining practices in elementary math classrooms identified two instructional approaches that showed positive effects on student learning. The first was cooperative learning, in which students work in pairs or small teams and are rewarded based on how well the group learns. The other approach included programs that helped teachers introduce math concepts and improve skills in classroom management, time management, and motivation. This analysis also found that using computer-assisted instruction had moderate to substantial effects on student learning, although this type of instruction was always supplementary to other approaches or programs being used. We found through our literature review and interviews with researchers that the issue of effective instructional practices is intertwined with professional development. To enable all students to achieve the high standards of learning envisioned by standards-based accountability systems, teachers need extensive skills and knowledge in order to use effective teaching practices in the classroom. Given this, professional development is critical to supporting teachers’ learning of new skills and their application. Specifically, the research concludes that professional development will more likely have positive impacts on both teacher learning and student achievement if it:
Focuses on a content area with direct links to the curriculum;
Challenges teachers intellectually through reflection and critical problem solving;
Aligns with goals and standards for student learning;
Lasts long enough so that teachers can practice and revise their instructional strategies;
Occurs collaboratively within a teacher learning community—ongoing teams of teachers that meet regularly for the purposes of learning, joint lesson planning, and problem solving;
Involves all the teachers within a school or department;
Provides active learning opportunities with direct applications to the classroom; and
Is based on teachers’ input regarding their learning needs.
Some researchers have raised concerns about the quality and intensity of professional development currently received by many teachers nationwide. One researcher summarized these issues by stating that professional development training for teachers is often too short, provides no classroom follow-up, and models more “telling than guiding” practices. Given the decentralized nature of the U.S.
education system, the support and opportunity for professional development services for teachers vary among states and school districts, and there are notable examples of states that have focused resources on various aspects of professional development. Nevertheless, shortcomings in teachers’ professional development experiences overall are especially evident when compared with professional development requirements for teachers in countries whose students perform well on international tests, such as the Trends in International Mathematics and Science Study and the Program for International Student Assessment. For example, one study showed that fewer than 10 percent of U.S. math teachers in school year 2003-04 experienced more than 24 hours of professional development in mathematics content or pedagogy during the year; by contrast, teachers in Sweden, Singapore, and the Netherlands are required to complete 100 hours of professional development per year. We provided a copy of our draft report to the Secretary of Education for review and comment. Education’s written comments, which are contained in appendix V, expressed support for the important questions that the report addresses and noted that the American Recovery and Reinvestment Act of 2009 included $250 million to improve assessment and accountability systems. The department specifically stated that the money is for statewide data systems to provide information on individual student outcomes that could help schools strengthen instructional practices and improve student achievement. However, the department raised several issues about the report’s approach. Specifically, the department commented that we (1) did not provide the specific research citations throughout the report for each of our findings or clearly explain how we selected our studies; (2) mixed the opinions of education experts with our findings gleaned from the review of the literature; (3) did not present data on the extent to which test formats had changed or on the relationship between test format and teaching practices when discussing our assessment findings; and (4) did not provide complete information from an Education survey regarding increases and decreases in instructional time. As stated at the beginning of our report, the list of studies we reviewed and used for our findings is contained in appendix IV. We provide a description in appendix I of our criteria, the types of databases searched, the types of studies examined (e.g., experimental and nonexperimental), and the process by which we evaluated them. We relied heavily on two literature syntheses conducted by the Department of Education—Standards in Classroom Practice: Research Synthesis and The Influence of Standards on K-12 Teaching and Student Learning: A Research Synthesis, which are included in the list. These two syntheses covered, in a more comprehensive way than many of the other studies that we reviewed, the breadth of the topics that we were interested in and included numerous research studies in their reviews. Many of the findings in this report about the research are taken from the conclusions reached in these syntheses. However, to make this fact clearer and more prominent, we added this explanation to our abbreviated scope and methodology section on page 5 of the report. Regarding the use of expert opinion, we determined that obtaining the views of experts about the research we were reviewing would be critical to our understanding of its broader implications.
This was particularly important given the breadth and scope of our objectives. The experts we interviewed, whose names and affiliations are listed in appendix III, are prominent researchers who conduct, review, and reflect on the current research in the field, and whose work is included in some of the studies we reviewed, including the two literature syntheses written by the Department of Education and used by us in this study. We did not consider their opinions “conjecture”; rather, we considered them grounded in and informed by their many years of respected work on the topic. We have been clear in the report as to when we are citing expert opinion, the research studies, or both. Regarding the report section discussing the research on assessments, it was our intent to highlight that, according to the research, assessments have both positive and negative influences on classroom teaching practices, not to conclude that NCLBA was the cause of either. Our findings in this section of the report are, in large part, based on conclusions from the department’s syntheses mentioned earlier. For example, The Influence of Standards on K-12 Teaching and Student Learning: A Research Synthesis states, “… tests matter—the content covered, the format used, and the application of their results—all influence teacher behavior.” Furthermore, we previously reported that states most often have chosen multiple choice assessments over other types because they can be scored inexpensively and their scores can be released prior to the next school year as required by NCLBA. That report also notes that state officials and alignment experts said that multiple choice assessments have limited the content of what can be tested, stating that highly complex content is “difficult if not impossible to include with multiple choice items.” However, we have revised this paragraph to clarify our point and provide additional information. Concerning the topic of narrowing the curriculum, we agree with the Department of Education that this report should include a fuller description of the data results from the cited Education survey in order to help the reader put the data in an appropriate context. Hence, we have added information to that section of the report. However, one limitation of the survey data we cite is that they cover changes in instructional time for a short time period—from school year 2004–05 to 2006–07. In its technical comments, the department refers to its recent report, Title I Implementation: Update on Recent Evaluation Findings, for a fuller discussion of this issue. The Title I report, while noting that most elementary teachers reported no change from 2004–05 to 2006–07 in the amount of instructional time that they spent on various subjects, also provides data over a longer, albeit earlier, time period, from 1987–88 to 2003–04, from the National Center for Education Statistics’ Schools and Staffing Survey. In analyzing these data, the report states that elementary teachers had increased instructional time on reading and mathematics and decreased the amount of time spent on science and social studies during this period. We have added this information as well. Taken together, we believe these data further reinforce our point that assessments under current accountability systems can have, in addition to positive influences on teaching, some negative ones as well, such as the curriculum changes noted in the report, even if the extent of these changes is not fully known.
Education also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To address the objectives of this study, we used a variety of methods. To determine the types of instructional practices schools and teachers are using to help students achieve state academic standards and whether those practices differ by school characteristics, we used two recent surveys of principals and teachers. The first survey, a nationally representative survey from the Department of Education’s (Education) National Longitudinal Study of No Child Left Behind (NLS-NCLB) conducted by the RAND Corporation (RAND), asked principals the extent to which their schools were focusing on certain strategies in their voluntary school improvement efforts. Education’s State and Local Implementation of the No Child Left Behind Act Volume III—Accountability Under NCLB: Interim Report included information about the strategies emphasized by principals as a whole, and we obtained from Education the NLS-NCLB database to determine the extent to which principals’ responses differed by school characteristic variables. We conducted this analysis on school year 2006-2007 data by controlling for four school characteristic variables: (1) the percentage of a school’s students receiving free or reduced-price lunch (poverty); (2) the percentage of students who are a racial minority (minority); (3) whether the school is in an urban, urban fringe (suburban), or rural area (school location); and (4) the school’s adequate yearly progress (AYP) status. We analyzed data from a second RAND survey, a three-state survey sponsored by the National Science Foundation that asked math teachers in California, Georgia, and Pennsylvania how their classroom teaching strategies changed in response to a state math test. RAND selected these states to represent a range of approaches to standards-based accountability and to provide some geographic and demographic diversity; the survey data are representative only of those three states individually. RAND’s report on the three-state survey data included information about how teachers within each of the three states had changed their teaching practices due to a state accountability test. RAND provided us with descriptive data tables based on its school year 2005-2006 survey data; we analyzed the data to measure associations between the strategies used and the school characteristic variables. We requested tables that showed this information for teachers in all schools, and separately for teachers in different categories of schools (elementary and middle schools) and by the school characteristics of poverty, minority, school location, and AYP status. We obtained from RAND standard error information associated with the estimates from the different types of schools and thus were able to test whether differences between what teachers from different types of schools reported were statistically significant.
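To illustrate the kind of comparison described above, the following sketch shows how a difference between two survey estimates might be tested for statistical significance using their standard errors. The proportions and standard errors are hypothetical values chosen for illustration, not figures from the NLS-NCLB or RAND surveys, and the approach shown (a two-sided z-test on independent estimates) is a simplified stand-in for the weighted survey methods RAND and Education actually used.

```python
import math

def significant_difference(p1, se1, p2, se2, z_critical=1.96):
    """Test whether two independent survey estimates differ significantly.

    p1, p2   -- estimated proportions for two groups of schools
                (e.g., high-poverty vs. low-poverty)
    se1, se2 -- standard errors of those estimates
    Returns the z statistic and whether |z| exceeds the critical value
    for a two-sided test at the 95 percent confidence level.
    """
    z = (p1 - p2) / math.sqrt(se1 ** 2 + se2 ** 2)
    return z, abs(z) > z_critical

# Hypothetical example: 60 percent of teachers in one group of schools and
# 45 percent in another report a given practice, each estimate with a
# standard error of 4 percentage points.
z, significant = significant_difference(0.60, 0.04, 0.45, 0.04)
print(f"z = {z:.2f}, statistically significant: {significant}")
```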
As part of our analyses for both surveys, we reviewed documentation and performed electronic testing of the data obtained through the surveys. We also interviewed several of the researchers responsible for the data collection and analyses and obtained information about the measures they took to ensure data reliability. On the basis of these efforts, we determined that the data from each of these surveys were sufficiently reliable for the purposes of our study. We reviewed existing literature to determine what researchers have found regarding the effect of standards-based accountability systems on instructional practices and the practices that work in raising student achievement. To identify existing studies, we conducted searches of various databases, such as the Education Resources Information Center, Proquest, Dialog EDUCAT, and Education Abstracts. We also asked all of the education researchers whom we interviewed to recommend additional studies. From these sources, we identified 251 studies that were relevant to our study objectives about the effect of standards-based accountability systems on instructional practices and the instructional practices that are effective in raising student achievement. We selected them according to the following criteria: the studies covered the years 2001 through 2008 and were experimental or quasi-experimental studies, literature syntheses, or multi-site studies. We selected the studies for our review based on their methodological strength, given the limitations of the methods used, and not necessarily on whether the results could be generalized. We performed our searches from August 2008 to January 2009. To assess the methodological quality of the selected studies, we developed a data collection instrument to obtain information systematically about each study being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined factors related to the use of comparison and control groups; the appropriateness of sampling and data collection methods; and, for syntheses, the process and criteria used to identify studies. A senior social scientist with training and experience in evaluation research and methodology read and coded the methodological discussion for each evaluation. A second senior social scientist reviewed each completed data collection instrument and the relevant documentation to verify the accuracy of every coded item. This review identified 20 studies that met GAO’s criteria for methodological quality. We supplemented our synthesis by interviewing prominent education researchers identified in frequently cited articles and through discussions with knowledgeable individuals. We also conducted interviews with officials at the U.S. Department of Education, including the Center on Innovation and Improvement, and the Institute of Education Sciences’ National Center for Education Evaluation and Regional Assistance, as well as other educational organizations. We also reviewed relevant federal laws and regulations.
In order to analyze the National Longitudinal Study of No Child Left Behind (NLS-NCLB) principal survey conducted by the RAND Corporation, we examined the strategies on which principals most often focused, taking into account the percentage of a school’s students receiving free or reduced-price lunch (poverty), the percentage of students who are a racial minority (minority), whether the school is in an urban, suburban, or rural area (school location), and the school’s adequate yearly progress (AYP) status (see table 1). Our analyses used “odds ratios,” generally defined as the ratio of the odds of an event occurring in one group compared to the odds of it occurring in another group, to express differences in the likelihoods of schools with different characteristics using these strategies. We used odds ratios rather than percentages because they are more appropriate for statistical modeling and multivariate analysis. Odds ratios indicate how much higher (when they are greater than 1.0) or lower (when they are less than 1.0) the odds were that principals would respond that a given strategy was a major or moderate focus. We included a reference category for the school characteristics (low minority, low poverty, and central city) in the top row of table 1, and put comparison groups beneath those reference categories, as indicated by the column heading in the second row (high-minority, high-poverty, or rural schools). As an example, the third cell in the “high-minority schools” column indicates that the odds that principals would make “implementing new instructional approaches or curricula in reading/language arts/English” a focus of their school improvement efforts were 2.65 times higher for high-minority schools than for low-minority schools. In another example, the odds that principals would “restructure the school day to teach core content areas in greater depth (e.g., establishing a literacy block)” were 2.8 times higher for high-poverty schools than for low-poverty schools, as seen in the sixth cell under “high-poverty schools.” Those cells with an asterisk indicate statistically significant results; that is, we have a high degree of confidence that the differences we see are not just due to chance but show an actual difference in the survey responses. See appendix I for further explanation of our methodology.
[Appendix table of selected studies and their methodologies omitted; surviving entries include “Strong States, Weak Schools: The Benefits and Dilemmas of Centralized Accountability” (quasi-experimental design with matched groups; multiple regressions used with data) and a literature review using a best-evidence synthesis (related to a meta-analysis).]
Cornelia M. Ashby (202) 512-7215 or ashbyc@gao.gov. Janet Mascia (Assistant Director), Bryon Gordon (Assistant Director), and Andrew Nelson (Analyst-in-Charge) managed all aspects of the assignment. Linda Stokes and Caitlin Tobin made significant contributions to this report in all aspects of the work. Kate van Gelder contributed to writing this report, and Ashley McCall contributed to research for the report. Luann Moy, Justin Fisher, Cathy Hurley, Douglas Sloane, and John Smale Jr. provided key technical support, and Doreen Feldman and Sheila R. McCoy provided legal support. Mimi Nguyen developed the graphics for the report.
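As a concrete illustration of the odds ratios described in the appendix above, the short sketch below computes an odds ratio from two response rates. The percentages are hypothetical and are not taken from table 1; the point is simply that an odds ratio compares the odds, p/(1 - p), in each group rather than the percentages themselves.

```python
def odds_ratio(p_group, p_reference):
    """Odds ratio comparing a group to its reference category.

    p_group, p_reference -- proportions of principals in each group who
    reported a major or moderate focus on a given strategy.
    """
    odds_group = p_group / (1 - p_group)
    odds_reference = p_reference / (1 - p_reference)
    return odds_group / odds_reference

# Hypothetical example: 70 percent of principals at high-poverty schools and
# 45 percent at low-poverty schools report a major or moderate focus.
ratio = odds_ratio(0.70, 0.45)
print(f"Odds ratio: {ratio:.2f}")  # about 2.85 -- the odds, not the percentages, are roughly 2.9 times higher
```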
The federal government has invested billions of dollars to improve student academic performance, and many schools, teachers, and researchers are trying to determine the most effective instructional practices with which to accomplish this. The Conference Report for the Consolidated Appropriations Act for Fiscal Year 2008 directed GAO to study strategies used to prepare students to meet state academic achievement standards. To do this, GAO addressed the following questions: (1) What types of instructional practices are schools and teachers most frequently using to help students achieve state academic standards, and do those instructional practices differ by school characteristics? (2) What is known about how standards-based accountability systems have affected instructional practices? (3) What is known about instructional practices that are effective in improving student achievement? GAO analyzed data from a 2006-2007 national survey of principals and a 2005-2006 survey of teachers in three states, conducted a literature review of the impact of standards-based accountability systems on instructional practices and of practices that are effective in improving student achievement, and interviewed experts. Nationwide, most principals focused on multiple strategies to help students meet academic standards, such as using student data to inform instruction and increasing professional development for teachers, according to GAO's analysis of data from a U.S. Department of Education survey. Many of these strategies were used more often at high-poverty schools--those where 75 percent or more of the students were eligible for the free and reduced-price lunch program--and high-minority schools--those where 75 percent or more of students were identified as part of a minority population--than at lower-poverty and lower-minority schools. Likewise, math teachers in California, Georgia, and Pennsylvania increased their use of certain instructional practices in response to their state tests, such as focusing more on topics emphasized on assessments and searching for more effective teaching methods, and teachers at high-poverty and high-minority schools were more likely than teachers at lower-poverty schools and lower-minority schools to have made these changes, according to GAO's analysis of survey data collected by the RAND Corporation. Some researchers suggested that differences exist in the use of these practices because schools with lower poverty or lower minority student populations might generally be meeting accountability requirements and therefore would need to try these strategies less frequently. Research shows that standards-based accountability systems can influence instructional practices in both positive and negative ways. For example, some research notes that using a standards-based curriculum that is aligned with corresponding instructional guidelines can facilitate the development of higher-order thinking skills in students. But, in some cases, teacher practices did not always reflect the principles of standards-based instruction, and the difficulties in aligning practice with standards were attributed, in part, to current accountability requirements. Other research noted that assessments can be powerful tools for improving the learning process and evaluating student achievement, but assessments can also have some unintended negative consequences on instruction, including narrowing the curriculum to only material that is tested.
Many experts stated that methodological issues make it difficult to determine definitively which specific instructional practices improve student learning and achievement. Nevertheless, some studies and experts pointed to instructional practices that are considered to be effective in raising student achievement, such as differentiated instruction. Professional development for teachers was also highlighted as important for giving teachers the skills and knowledge necessary to implement effective teaching practices.
In November 2006, we reported that since 2001, the amount of national research that has been conducted on the prevalence of domestic violence and sexual assault had been limited, and less research had been conducted on dating violence and stalking. At that time, no single, comprehensive effort existed that provided nationwide statistics on the prevalence of these four categories of crime among men, women, youth, and children. Rather, various national efforts addressed certain subsets of these crime categories among some segments of the population and were not intended to provide comprehensive estimates. For example, HHS’s Centers for Disease Control and Prevention’s (CDC) National Violent Death Reporting System, which collects incident-based data from multiple sources, such as coroner/medical examiner reports, gathered information on violent deaths resulting from domestic violence and sexual assaults, among other crimes. However, it did not gather information on deaths resulting from dating violence or stalking incidents. In our November 2006 report, we noted that designing a single, comprehensive data collection effort to address these four categories of crime among all segments of the population independent of existing efforts would be costly, given the resources required to collect such data. Furthermore, it would be inefficient to duplicate some existing efforts that already collect data for certain aspects of these categories of crime. Specifically, in our November 2006 report, we identified 11 national efforts that had reported data on certain aspects of domestic violence, sexual assault, dating violence, and stalking. However, limited national data were available to estimate prevalence from these 11 efforts because they (1) largely focused on incidence rather than prevalence, (2) used varying definitions for the types of crimes and categories of victims covered, and (3) had varying scopes in terms of incidents and categories they addressed. Focus on incidence. Four of the 11 national data collection efforts focused solely on incidence—the number of separate times a crime is committed against individuals during a specific time period—rather than prevalence—the unique number of individuals who were victimized during a specific time period. As a result, at the time of our November 2006 report, information gaps existed related to the prevalence of domestic violence, sexual assault, dating violence, and stalking, particularly dating violence among victims age 12 and older and stalking among victims under age 18. Obtaining both incidence and prevalence data is important for determining which services to provide to the four differing categories of crime victims. HHS also noted that both types of data are important for determining the impact of violence and strategies to prevent it from occurring. Although perfect data may never exist because of the sensitivity of these crimes and the likelihood that not all occurrences will be disclosed, agencies have taken initiatives since our report was issued to help address some of these gaps or have efforts underway. These initiatives are consistent with our recommendation that the Attorney General and Secretary of Health and Human Services determine the extent to which initiatives being planned or underway can be designed or modified to address existing information gaps. 
For example, DOJ’s Office of Juvenile Justice and Delinquency Prevention (OJJDP), in collaboration with CDC, sponsored a nationwide survey of the incidence and prevalence of children’s (ages 17 and younger) exposure to violence across several major crime categories, including witnessing domestic violence and peer victimization (which includes teen dating violence). OJJDP released incidence and prevalence measures related to children’s exposure to violence, including teen dating violence, in 2009. Thus, Congress, agency decision makers, practitioners, and researchers have more comprehensive information to assist them in making decisions on grants and other issues to help address teen dating violence. To address information gaps related to teen dating violence and stalking victims under the age of 18, in 2010, CDC began efforts on a teen dating violence prevention initiative known as “Dating Matters.” One activity of this initiative is to identify community-level indicators that can be used to measure both teen dating violence and stalking in high-risk urban areas. CDC officials reported that they plan to begin implementing the first phase of “Dating Matters” in as many as four high-risk urban areas in September 2011 and expect that the results from this phase will be completed by 2016. Thus, it is too early to tell the extent to which this effort will fully address the information gap related to prevalence of stalking victims under the age of 18. Varying definitions. The national data collection efforts we reviewed could not provide a basis for combining the results to compute valid and reliable nationwide prevalence estimates because the efforts used varying definitions related to the four categories of crime. For example, CDC’s Youth Risk Behavior Surveillance System’s definition of dating violence included the intentional physical harm inflicted upon a survey respondent by a boyfriend or girlfriend. In contrast, the Victimization of Children and Youth Survey’s definition did not address whether the physical harm was intentional. To address the issue of varying definitions, we recommended that the Attorney General and the Secretary of Health and Human Services, to the extent possible, require the use of common definitions when conducting or providing grants for federal research. This would provide for leveraging individual collection efforts so that the results of such efforts could be readily combined to achieve nationwide prevalence estimates. HHS agreed with this recommendation. In commenting on our November 2006 draft report, DOJ expressed concern regarding the potential costs associated with implementing this and other recommendations we made and suggested that a cost-benefit analysis be conducted. We agreed that performing a cost-benefit analysis is a critical step, as acknowledged by our recommendation that DOJ and HHS incorporate alternatives for addressing information gaps deemed cost-effective in future budget requests. HHS agreed with this recommendation and both HHS and DOJ have taken actions to address it by requesting or providing additional funding for initiatives to address information gaps, such as those on teen dating violence. In response to our recommendation on common definitions, in August 2007, HHS reported that it continued to encourage, but not require, the use of uniform definitions of certain forms of domestic violence and sexual assault it established in 1999 and 2002, respectively. 
At the same time, DOJ reported that it consistently used uniform definitions of intimate partner violence in project solicitations, statements of work, and published reports. Since then, officials from CDC reported that in October 2010, the center convened a panel of 10 experts to revise and update its definitions of certain forms of domestic violence and sexual assault given advancements in this field of study. CDC is currently reviewing the results from the panel and plans to hold a second panel in 2012, consisting of practitioners, to review the first panel’s results and to obtain consensus on the revised definitions. Moreover, HHS reported that it is also encouraging the use of uniform definitions by implementing the National Intimate Partner and Sexual Violence Survey. This initiative is using consistent definitions and methods to collect information on women and men’s experiences with a range of intimate partner violence, sexual violence, and stalking victimization. Thus, by using consistent methods over time, HHS reported that it will have comparable data at the state and national level to inform intervention and prevention efforts and aid in the evaluation of these efforts. In addition, according to a program specialist from OJJDP, in 2007, OJJDP created common definitions for use in the National Survey of Children’s Exposure to Violence to help collect data and measure incidence and prevalence rates for child victimization, including teen dating violence. While it is too early to tell the extent to which HHS’s efforts will result in the wider use of common definitions to assist in the combination of data collection efforts, OJJDP efforts in developing common definitions have supported efforts to generate national incidence and prevalence rates for child victimization. A program specialist from OJJDP noted that OJJDP plans to focus on continuously improving the definitions. Varying scope. The national data collection efforts we reviewed as part of our November 2006 report also could not provide a basis for combining the results to compute valid and reliable nationwide prevalence estimates because the efforts had varying scopes in terms of the incidents and categories of victims that were included. For example, in November 2006, we reported that CDC’s Youth Risk Behavior Surveillance System excludes youth who are not in grades 9 through 12 and those who do not attend school; whereas the Victimization of Children and Youth Survey was addressed to youth ages 12 and older, or those who were at least in the sixth grade. National data collection efforts underway since our report was issued may help to overcome this challenge. For instance, in September 2010, HHS reported that CDC was working in collaboration with the National Institute of Justice to develop the National Intimate Partner and Sexual Violence Survey. Specifically, HHS reported that, through this system, it is collecting information on women’s and men’s experiences with a range of intimate partner violence, sexual violence, and stalking victimization. HHS reported that it is gathering experiences that occurred across a victim’s lifespan (including experiences that occurred before the age of 18) and plans to generate incidence and prevalence estimates for intimate partner violence, sexual violence, dating violence, and stalking victimization at both the national and state levels. The results are expected to be available in October 2011. 
These agency initiatives may not fill all information gaps on the extent to which women, men, youth, and children are victims of the four predominant crimes VAWA addresses. However, the efforts provide Congress with additional information it can consider on the prevalence of these crimes as it makes future investment decisions when reauthorizing and funding VAWA moving forward. We reported in July 2007 that recipients of 11 grant programs we reviewed collected and reported data to the respective agencies on the types of services they provide, such as counseling; the total number of victims served; and in some cases, demographic information, such as the age of victims; however, data were not available on the extent to which men, women, youth, and children receive each type of service for all services. This situation occurred primarily because the statutes governing the 11 grant programs do not require the collection of demographic data by type of service, although they do require reports on program effectiveness, including number of persons served and number of persons seeking services who could not be served. Nevertheless, VAWA authorizes that a range of services can be provided to victims, and we determined that services were generally provided to men, women, youth, and children. The agencies administering these 11 grant programs—HHS and DOJ—collect some demographic data for certain services, such as emergency shelter under the Family Violence Prevention and Services Act and supervised visitation and exchange under VAWA. The quantity of information collected and reported varied greatly for the 11 programs and was extensive for some, such as those administered by DOJ’s Office on Violence Against Women (OVW) under VAWA. The federal agencies use this information to help inform Congress about the known results and effectiveness of the grant programs. However, even if demographic data were available by type of service for all services, such data might not be uniform and reliable because, among other factors, (1) the authorizing statutes for these programs have different purposes and (2) recipients of grants administered by HHS and DOJ use varying data collection practices. Authorizing statutes have different purposes. The authorizing statutes for the 11 grant programs we reviewed have different purposes; therefore the reporting requirements for the 11 grant programs must vary to be consistent with these statutes. However, if a grant program addresses a specific service, the demographic data collected are more likely to address the extent to which men, women, youth, and children receive that specific service. For example, in commenting on our July 2007 report, officials from OVW stated that they could provide such demographic data for 3 of its 8 grant programs we reviewed—the Transitional Housing Assistance Grants Program, the Safe Havens: Supervised Visitation and Safe Exchange Grant Program, and the Legal Assistance for Victims Grant Program. Recipients of grants administered by HHS and DOJ use varying data collection practices. For example, some recipients request that victims self-report data on the victim’s race, whereas other recipients rely on visual observation of the victim to obtain these data. 
Since we issued our July 2007 report, officials from HHS’s Administration for Children and Families (ACF) and OVW told us that they modified their grant recipient forms to improve the quality of the recipient data collected and to reflect statutory changes to the programs and reporting requirements. Moreover, ACF officials stated that they adjusted the demographic categories on their forms to mirror OVW’s efforts so data would be collected consistently across the government for these grant programs. In addition, OVW officials stated that they have continued to provide technical assistance and training to grant recipients on completing their forms through a cooperative agreement with a university. As a result of these and other efforts, officials from both agencies reported that the quality of the recipient data has improved, resulting in fewer errors and more complete data. As we noted in our July 2007 report, HHS and DOJ officials stated that they would face significant challenges in collecting and reporting data on the demographic characteristics of victims receiving services by type of service funded by the 11 grant programs included in our review. These challenges included concerns about victims’ confidentiality and safety, resource constraints, overburdening recipients, and technological issues. For example, according to officials from ACF and OVW, requiring grant recipients to collect this level of detail may inadvertently disclose a victim’s identity, thus jeopardizing the victim’s safety. ACF officials also said that some of their grant recipients do not have the resources to devote to these data collection efforts, since their primary focus is on service delivery. In addition, ACF officials said that being too prescriptive in requiring demographic data could overburden some grant recipients that may report data to multiple funding entities, such as federal, state, and local entities and private foundations. Furthermore, HHS and DOJ reported that some grant recipients do not have sophisticated data collection systems in place to allow them to collect additional information. In our July 2007 report, we did not recommend that federal departments require their grant recipients to collect and report additional data on the demographic characteristics of victims receiving services by type of service because of the potential costs and difficulties associated with addressing the challenges HHS and DOJ officials identified, relative to the benefits that would be derived. In conclusion, there are important issues to consider in moving forward on the reauthorization of VAWA. Having better and more complete data on the prevalence of domestic violence, sexual assault, dating violence, and stalking, as well as on the related services provided to victims of these crimes, can better inform and shape the federal programs intended to meet the needs of these victims. One key challenge in doing this is weighing the benefits of obtaining these data against their costs, given the sensitive nature of the crimes, those directly affected, and the need for services and support. Chairman Leahy, Ranking Member Grassley, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or larencee@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Debra B. Sebastian, Assistant Director; Aditi Archer, Frances Cook, and Lara Miklozek. Key contributors for the previous work that this testimony is based on are listed in each individual report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses issues related to the reauthorization of the Violence Against Women Act (VAWA). In hearings conducted from 1990 through 1994, Congress noted that violence against women was a problem of national scope and that the majority of crimes associated with domestic violence, sexual assault, and stalking were perpetrated against women. These hearings culminated in the enactment of VAWA in 1994 to address these issues on a national level. VAWA established grant programs within the Departments of Justice (DOJ) and Health and Human Services (HHS) for state, local, and Indian tribal governments and communities. These grants have various purposes, such as providing funding for direct services including emergency shelter, counseling, and legal services for victims of domestic violence, sexual assaults and stalking across all segments of the population. Recipients of funds from these grant programs include, among others, state agencies, tribes, shelters, rape crisis centers, organizations that provide legal services, and hotlines. In 2000, during the reauthorization of VAWA, language was added to the law to provide greater emphasis on dating violence. The 2006 reauthorization of VAWA expanded existing grant programs and added new programs addressing, among other things, young victims. In fiscal year 2011, Congress appropriated approximately $418 million for violence against women programs administered by DOJ and made an additional $133 million available for programs administered by HHS. The 2006 reauthorization of VAWA required us to study and report on data indicating the prevalence of domestic violence, dating violence, sexual assault, and stalking among men, women, youth, and children, as well as services available to the victims. Such data could be used to inform decisions regarding investments in grant programs. In response, we issued two reports in November 2006 and July 2007 on these issues, respectively. This testimony is based on these reports and selected updates we conducted in July 2011 related to actions DOJ and HHS have taken since our prior reviews to improve the quality of recipient data. This testimony, as requested, highlights findings from those reports and discusses the extent to which (1) national data collection efforts report on the prevalence of men, women, youth, and children who are victims of domestic violence, sexual assault, dating violence, and stalking, and (2) the federal government has collected data to track the types of services provided to these categories of victims and any challenges federal departments report that they and their grant recipients face in collecting and reporting demographic characteristics of victims receiving such services by type of service. In November 2006, we reported that since 2001, the amount of national research that has been conducted on the prevalence of domestic violence and sexual assault had been limited, and less research had been conducted on dating violence and stalking. At that time, no single, comprehensive effort existed that provided nationwide statistics on the prevalence of these four categories of crime among men, women, youth, and children. Rather, various national efforts addressed certain subsets of these crime categories among some segments of the population and were not intended to provide comprehensive estimates. 
For example, HHS's Centers for Disease Control and Prevention's (CDC) National Violent Death Reporting System, which collects incident-based data from multiple sources, such as coroner/medical examiner reports, gathered information on violent deaths resulting from domestic violence and sexual assaults, among other crimes. However, it did not gather information on deaths resulting from dating violence or stalking incidents. We reported in July 2007 that recipients of 11 grant programs we reviewed collected and reported data to the respective agencies on the types of services they provide, such as counseling; the total number of victims served; and in some cases, demographic information, such as the age of victims; however, data were not available on the extent to which men, women, youth, and children receive each type of service for all services. This situation occurred primarily because the statutes governing the 11 grant programs do not require the collection of demographic data by type of service, although they do require reports on program effectiveness, including number of persons served and number of persons seeking services who could not be served. Nevertheless, VAWA authorizes that a range of services can be provided to victims, and we determined that services were generally provided to men, women, youth, and children. The agencies administering these 11 grant programs--HHS and DOJ--collect some demographic data for certain services, such as emergency shelter under the Family Violence Prevention and Services Act and supervised visitation and exchange under VAWA. The quantity of information collected and reported varied greatly for the 11 programs and was extensive for some, such as those administered by DOJ's Office on Violence Against Women (OVW) under VAWA. The federal agencies use this information to help inform Congress about the known results and effectiveness of the grant programs. However, even if demographic data were available by type of service for all services, such data might not be uniform and reliable because, among other factors, (1) the authorizing statutes for these programs have different purposes and (2) recipients of grants administered by HHS and DOJ use varying data collection practices.
The Title I property improvement program was established by the National Housing Act (12 U.S.C. 1703) to encourage lending institutions to finance property improvement projects that would preserve the nation’s existing housing stock. Under the program, FHA insures 90 percent of a lender’s claimable loss on an individual defaulted loan. The total amount of claims that can be paid to a lender is limited to 10 percent of the value of the total program loans held by each lender. Today, the value of Title I’s outstanding loans is relatively small compared with that of other FHA housing insurance programs. As of September 30, 1997, the value of loans outstanding on the property improvement program totaled about $4.4 billion on 364,423 loans. By contrast, the value of outstanding FHA single-family loans in its Mutual Mortgage Insurance Fund totaled about $360 billion. Similarly, Title I’s share of the owner-occupied, single-family remodeling market is small—estimated by the National Association of Home Builders to be about 1 percent in fiscal year 1997. Approximately 3,700 lenders are approved by FHA to make Title I loans. Lenders are responsible for managing many aspects of the program, including making and servicing loans, monitoring the contractors, and dealing with borrowers’ complaints. In conducting these activities, lenders are responsible for complying with FHA’s underwriting standards and regulations and ensuring that home improvement work is inspected and completed. FHA is responsible for approving lenders, monitoring their operations, and reviewing the claims submitted for defaulted loans. Title I program officials consider lenders to have sole responsibility for program operations, with HUD’s role being primarily to oversee lenders and ensure that claims paid on defaulted loans are proper. Homeowners obtain property improvement loans by applying directly to Title I lenders or by having a Title I lender-approved dealer—that is, a contractor—prepare a credit application or otherwise assist the homeowner in obtaining the loan from the lender. During fiscal years 1986 through 1996, about 520,000 direct and 383,000 dealer loans were made under the program. By statute, the maximum size of property improvement loans is $25,000 for single-family loans and the maximum loan term is about 20 years. Title I regulations require borrowers to have an income adequate to meet the periodic payments required by a property improvement loan. Most borrowers have low to moderate incomes, little equity in their homes, and/or poor credit histories. HUD’s expenses under the Title I program, such as claim payments made by FHA on defaulted loans, are financed from three sources of revenue: (1) insurance charges to lenders of 0.5 percent of the original loan amount for each year the loan is outstanding, (2) funds recovered from borrowers who defaulted on loans, and (3) appropriations. In an August 1997 report on the Title I program, Price Waterhouse concluded that the program was underfunded during fiscal years 1990 through 1996. Price Waterhouse estimated that a net funding deficit of about $150 million occurred during the period, with a net funding deficit in 1996 of $11 million. Data from the Price Waterhouse report on estimated projected termination rates for program loans made in fiscal year 1996 can be used to calculate an estimated cumulative claim rate of about 10 percent over the life of Title I loans insured by FHA in that fiscal year. 
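As a rough illustration of the insurance arithmetic described above (90 percent coinsurance on each claimable loss, with cumulative claim payments to a lender limited to 10 percent of the value of that lender's total program loans), the sketch below shows one simplified reading of how a single claim payment might be limited. The function and all dollar amounts are hypothetical; actual claim-processing rules are more detailed than this.

```python
# Simplified sketch of the Title I claim arithmetic described above: FHA insures
# 90 percent of a lender's claimable loss on a defaulted loan, and total claims paid
# to a lender are limited to 10 percent of the value of that lender's program loans.
# The function and all dollar amounts are hypothetical illustrations.

COINSURANCE_RATE = 0.90    # FHA share of each claimable loss
PORTFOLIO_CAP_RATE = 0.10  # cap on cumulative claims, as a share of the lender's portfolio

def payable_claim(claimable_loss: float,
                  lender_portfolio_value: float,
                  claims_already_paid: float) -> float:
    """Amount payable on one defaulted loan under the 10 percent portfolio cap."""
    cap = PORTFOLIO_CAP_RATE * lender_portfolio_value
    remaining_authority = max(0.0, cap - claims_already_paid)
    return min(COINSURANCE_RATE * claimable_loss, remaining_authority)

# A lender holding $50 million in insured Title I loans that has already been paid
# $4.9 million in claims submits a $200,000 claimable loss on a defaulted loan.
print(payable_claim(200_000, 50_000_000, 4_900_000))  # 100000.0 -- limited by the cap
```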
When FHA-approved Title I lenders make program loans, they collect information on borrowers, such as age, income, and gender; the property, such as its address; and loan terms, such as interest rate. While lenders are required by the Home Mortgage Disclosure Act to report much of this information to their respective regulatory agencies, HUD collects little of this information when Title I loans are made. Using information that it requires lenders to provide, HUD records the lender’s and borrower’s names, state and county, as well as the size, term, and purpose of the loan. Other information that HUD collects for its other single-family loan insurance programs, such as the borrower’s address, Social Security number, income, and debt, is not collected when Title I loans are made. HUD does collect all of the information available on borrowers, property, and loans when Title I loans default and lenders submit claims. Title I officials told us they collected little information when loans were made because they consider the program to be lender-operated. As a result, HUD cannot identify the characteristics of borrowers and neighborhoods served by the program, nor can it identify certain potential abuses of the program. For example, HUD does not collect borrowers’ Social Security numbers and property addresses when loans are made. Therefore, HUD would have difficulty determining if some borrowers are obtaining multiple Title I loans or if some borrowers are exceeding the maximum amount of Title I loans per property when loans are made. HUD regulations limit the total amount of indebtedness on Title I loans to $25,000 for each single-family property. In this regard, our examination of HUD’s Title I claims data found a number of instances in which the same Social Security number was used for multiple claims. As discussed previously, claims on about 10 percent of the program’s loans can be expected over the life of program loans. Our examination of 16,556 claims paid by HUD between January 1994 and August 1997 revealed 247 instances in which the same Social Security number appeared on multiple claims. These cases totaled about $5.2 million in paid claims. In several instances, claims were paid on as many as five loans having the same Social Security number during the 3-1/2-year period. Our Office of Special Investigations, together with HUD’s Office of the Inspector General, is inquiring further into the circumstances surrounding these loans. However, because these loans may have been for multiple properties, or multiple loans on the same property that totaled less than $25,000, they may not have violated program regulations. Allowing individual borrowers to accumulate large amounts of Title I HUD-insured debt, however, exposes HUD to large losses in the case of financial stress on the part of such heavily indebted borrowers. In addition, while information available to HUD allows identification of potential abuses of the $25,000 indebtedness limit after loans have defaulted, control over the indebtedness limitation is not possible for the 90 percent of the program’s loans that do not default, because borrowers’ Social Security numbers and property addresses are not collected when the loans are made. While HUD collects more extensive information on program loans when they default, we found problems with the accuracy of some of the information recorded in its claims database. 
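The kind of duplicate-claim check described above can be illustrated with a short, hypothetical sketch: group paid claims by Social Security number and flag any number that appears more than once. The records and field names below are invented for illustration and do not reflect HUD's actual claims database.

```python
from collections import defaultdict

# Sketch of the duplicate-claim check described above: group paid claims by the
# Social Security number on the claim and flag any number that appears more than once.
# The records and field names are hypothetical, not HUD's actual claims data.
claims = [
    {"ssn": "111-11-1111", "property": "123 Main St", "claim_paid": 14_000},
    {"ssn": "111-11-1111", "property": "45 Oak Ave",  "claim_paid": 22_000},
    {"ssn": "222-22-2222", "property": "9 Pine Ct",   "claim_paid": 18_500},
]

by_ssn = defaultdict(list)
for claim in claims:
    by_ssn[claim["ssn"]].append(claim)

for ssn, group in by_ssn.items():
    if len(group) > 1:  # same Social Security number on multiple paid claims
        total = sum(c["claim_paid"] for c in group)
        properties = {c["property"] for c in group}
        print(f"{ssn}: {len(group)} claims totaling ${total:,} "
              f"on {len(properties)} properties")
        # Multiple claims alone do not prove a violation of the $25,000 per-property
        # indebtedness limit; that depends on whether the loans were on the same
        # property and on their combined original amounts.
```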
Our random sample of 53 loans on which a claim had been denied and subsequently paid by HUD found that 7 loans, or 13 percent, had been miscoded as dealer loans when they were direct loans, or as direct loans when they were dealer loans. This is important because HUD recently cited high default rates on dealer loans, among other reasons, for proposing regulations to eliminate the dealer loan portion of the program. Given this miscoding of loans as dealer or direct, we question HUD’s ability to identify default experience by loan type. In addition, HUD’s information on claims denied and subsequently approved was problematic. Although HUD can deny claims for property improvement loans for a number of reasons, HUD did not have a system in place to provide information on why claims are denied or approved for payment following a denial. HUD could not provide us with information on how many claims it denied because of poor underwriting or other program abuses or which lenders had a higher-than-average number of claims denied for specific program violations. In addition, we were unable to determine from HUD’s data system why a denied claim was subsequently paid following an appeal by the lender or waiver by HUD. Such information is important in determining how well lenders are complying with program regulations, whether internal controls need to be strengthened, and which lenders should be targeted for review by HUD’s Office of Quality Assurance. We also found that files for claims that were initially denied by HUD and subsequently paid frequently did not contain the names of program officials who decided the denied claims should be paid and the reasons for their decisions. Of the 53 randomly selected loan claim files we examined, 50 contained no evidence of further review by a HUD official following the initial denial or of any basis for eventually paying the claim. Unless information on who makes decisions to deny claims and the reasons for the denials and subsequent payments is documented, HUD has no basis for reviewing the reasonableness of those decisions. HUD recently made changes to its claims database system to identify the reasons claims are denied. Program officials agreed that such information is important in determining how well program regulations are being complied with and in targeting lenders for quality assurance reviews. Claims examiners are now required to identify their reasons for denial, including the section of the regulation that was violated. However, the change does not address the problem of missing documentation in the claims file explaining the reasons for paying claims that were previously denied. HUD’s monitoring reviews of Title I lenders to identify compliance problems have declined substantially in recent years. Between fiscal years 1995 and 1997, HUD performed 33 Title I on-site quality assurance reviews of lenders. Most of these reviews (26) were performed in fiscal year 1995. During fiscal years 1996 and 1997, HUD performed five and two on-site lender reviews, respectively. According to HUD officials, prior to fiscal year 1997, HUD had a staff of 23 individuals to monitor the 3,700 lenders approved by FHA to make Title I loans and about 8,000 other FHA-approved lenders making loans on other FHA insurance programs. Because of these limited monitoring resources, HUD decided to focus its lender monitoring on major high-volume FHA programs, according to these HUD officials. Monitoring priorities have also led to few follow-up reviews by HUD. 
As a result, it is difficult to determine what impact the quality assurance reviews that were performed had on improving lenders’ compliance. When making Title I loans, lenders are required to ensure that borrowers represent acceptable credit risks, with a reasonable ability to make payments on the loans, and to see that the property improvement work is completed. However, our examination of 53 loan claim files revealed that one or more required documents needed to ensure program compliance were missing from more than half (30) of the files. In 12 cases, the required original loan application, signed by the borrower, was not in the loan file. The original loan application is important because it is used by the claims examiner to review the adequacy of the lender’s underwriting and to ensure that the borrower’s signature and Social Security number match those on other documents, including the credit report. Furthermore, for 23 of the 53 claim files, we found that required completion certificates, certifying that the property improvement work had been completed, were missing or were signed but not dated by the borrowers. According to program guidelines, claims submitted for payment after defaults have occurred on dealer loans should not be paid unless a signed completion certificate is in the file. We found that completion certificates were missing from the files for 13 dealer loans and were not dated for another 4 dealer loans. Lastly, for 33 loans on which program regulations required that an inspection be conducted by the lender, 18 loan files did not contain the required inspection report. We also reviewed the 53 claim files to determine how well lenders were complying with underwriting standards. All documentation supporting the underwriting determination should be retained in the loan file, according to HUD regulations. HUD can deny a lender’s claim if the lender has not followed HUD underwriting standards in making the loan. However, HUD does not examine the quality of a lender’s loan underwriting during the claims process if the borrower made 12 loan payments before defaulting on the loan. Since 27 percent of the Title I loans that default do so within the first year, this practice, in effect, exempts the majority of defaulted loans from an examination of the quality of the lenders’ underwriting. Of the 53 loans in our sample, 13 defaulted within 12 months of loan origination and were subject to an underwriting review by HUD. We focused our underwriting examination on these 13 loan claim files. We found that for 4 of the 13 loans, on which HUD eventually paid claims, lenders made questionable underwriting decisions. Title I program regulations require that the lender’s credit application and review establish that the borrower is an acceptable credit risk, has 2 years of stable employment, and has income adequate to meet the periodic payments required by the loan, as well as the borrower’s other housing expenses and recurring charges. However, for four of these loans, information in the files indicated that the borrowers may not have had sufficient income to qualify for the loan or had poor credit. For example, on one loan, the lender used a pay stub covering the first 2 weeks of March to calculate the borrower’s annual income. The pay stub showed that the borrower’s year-to-date earnings were $6,700 by the middle of March, and this amount was used to calculate that his annual income was $34,000, or about $2,800 per month. 
However, the pay stub also showed that for the 2-week period in March, the borrower worked a full week with overtime and only earned $725, or about $1,600 per month. The file contained no other documentation, such as income tax returns, W-2 forms, or verification from the employer to support the higher monthly income. Program officials told us that it was acceptable to use one pay stub to calculate monthly income; however, the “yearly earnings to date” figure should not be used because it can overstate the income actually earned during a normal pay period. The borrower, with about $1,600 per month in corrected income, still met HUD’s income requirements for the amount of the loan. However, HUD denied the original claim because its underwriting standards had not been followed in that the borrower had poor credit at the time the loan was made. In a letter responding to HUD’s denial of its claim, the lender acknowledged that the borrower had limited credit at the time the loan was made, but pointed out the (miscalculated) higher income of $2,800 per month to justify making the loan. This reasoning was apparently accepted by HUD, as there was no evidence in the claim file that HUD questioned the error in calculating the borrower’s monthly income. The borrower defaulted on the loan after making two payments, and HUD paid a claim of $14,000. Similar problems with lenders’ noncompliance with Title I program regulations have been identified by HUD. As noted previously, between fiscal years 1995 and 1997, HUD performed 33 Title I on-site quality assurance reviews of lenders. Among other things, HUD cited lenders for engaging in poor credit underwriting practices and having loan files with missing inspection reports or inspection reports that were not signed or dated. HUD sent the lenders letters detailing its findings and requested a written response addressing the findings. HUD, however, did not perform follow-up on-site reviews of 32 lenders to ensure that they had taken corrective actions. Of the 33 on-site reviews, nine resulted in lenders being referred to HUD’s Mortgagee Review Board for further action. The Board assessed four of these lenders a total of $23,500 in civil penalties. Under its HUD 2020 Management Reform Plan and related efforts, HUD has been making changes to Title I program operations. HUD has relocated its claims examination unit to the Albany (New York) Financial Operations Center and contracted with Price Waterhouse to develop claims examination guidelines. According to program officials in Albany, the new claims process will be more streamlined and automated and include lenders filing claims electronically. In addition, HUD is consolidating all single-family housing operations from 81 locations across the nation into four Single-Family Homeownership Centers. Each center has established a quality assurance division to (1) monitor lenders, (2) recommend sanctions against lenders and other program participants such as contractors and loan officers, (3) issue limited denials of program participation against program participants, and (4) refer lenders for audits/investigations. However, since HUD’s quality assurance staff will monitor lenders involved in all FHA single-family programs, the impact of this change on improving HUD’s oversight of Title I lenders is unclear. Overall, by the end of fiscal year 1998, the quality assurance staff will increase to 76, up from 43 in February 1998. 
HUD expects that the addition of more quality assurance staff will increase the number of reviews of lenders and allow more comprehensive reviews of lender operations. In closing, Mr. Chairman, our preliminary analysis shows weaknesses in HUD’s management of its Title I property improvement loan insurance program and oversight of program lenders. These weaknesses center on the absence of information needed to manage the program and HUD’s oversight of lenders’ compliance with program regulations. HUD officials attributed these weaknesses to the program’s being lender-operated, limited staff resources, and HUD’s assignment of monitoring priorities. Because of these weaknesses, we are concerned that HUD may have little assurance that the property improvement program is operating efficiently and free of abuse. The challenge faced by HUD in managing and overseeing this program centers on how to obtain the information needed to manage the program and to strengthen the oversight of lenders for this program, which is relatively small compared with other FHA housing insurance programs. Our report will include any recommendations or options we have to offer to strengthen HUD’s management and oversight of the program. Mr. Chairman, this concludes my statement. We would be pleased to respond to any questions that you or Members of the Subcommittee may have.
GAO discussed certain aspects of the Department of Housing and Urban Development's (HUD) management and oversight of its loan insurance program for home improvements under Title I of the National Housing Act, focusing on: (1) the extent to which the information needed to manage the program was available to HUD; (2) the extent to which HUD was overseeing program lenders; and (3) whether HUD has any ongoing or planned efforts under way to strengthen its management and oversight. GAO noted that: (1) its preliminary analysis shows that HUD is not collecting information needed for managing the program; (2) specifically, GAO found that HUD collects little information when loans are made on program borrowers, properties, and loan terms, such as the borrower's income and the address of the property being improved; (3) moreover, HUD does not maintain information on why it denies loan claims or why it subsequently approves some for payment; (4) HUD also provides limited oversight of lenders' compliance with program regulations, conducting only 2 on-site lender reviews in fiscal year 1997 of the approximately 3,700 program lenders; (5) regarding the need for oversight of lenders' compliance, GAO found that loan claim files submitted by lenders to HUD following loan defaults often do not contain required loan documents, including the original loan applications and certifications signed by the borrower that the property improvement work has been completed; (6) in addition, some claims were paid by HUD even though there were indications that lenders did not comply with required underwriting standards when insuring the loan; (7) as a result of the management and oversight weaknesses GAO has observed, its preliminary work indicates that HUD does not know who the program is serving, if lenders are complying with program regulations, and whether certain potential program abuses are occurring, such as violations of the $25,000 limitation on the amount of Title I loan indebtedness for each property; (8) HUD officials attributed these weaknesses to the program's being lender-operated, limited staff resources, and HUD's assignment of monitoring priorities; (9) under the HUD 2020 Management Reform Plan and related efforts, HUD is making significant changes in all of its single-family housing programs, including the Title I property improvement program; (10) these changes are motivated in part by HUD's goals to downsize the agency and address long-standing agencywide management weaknesses; and (11) GAO is assessing the extent to which these changes may affect the management and oversight weaknesses it identified.
DLA is DOD’s logistics manager for all departmental consumable items and some repair parts. Its primary business function is materiel management: providing supply support to sustain military operations and readiness. In addition, DLA performs five other supply-related business functions: distributing materiel from DLA and service-owned inventories, purchasing fuels for DOD and the U.S. government, storing strategic materiel, marketing surplus DOD materiel for reuse and disposal, and providing numerous information services, such as item cataloging, for DOD and the U.S. government, as well as selected foreign governments. These six business functions are managed by field commands that report to and support the agency’s central command authority. In 2000, DLA refocused its logistics mission from that of a supplier of materiel to a manager of supply chain relationships. To support this transition, the agency developed a strategic plan (known as DLA 21) to reengineer and modernize its operations. Among the goals of DLA 21 are to optimize inventories, improve efficiency, increase effectiveness through organizational redesign, reduce inventories, and modernize business systems. DLA relies on over 650 systems to support warfighters by allowing access to global inventories. Whether it is ensuring that there is enough fuel to service an aircraft fleet, providing sufficient medical supplies to protect and treat military personnel, or supplying ample food rations to our soldiers on the frontlines, information technology plays a key role in ensuring that Defense Department agencies are prepared for their missions. Because of its heavy reliance on IT to accomplish its mission, DLA invests extensively in this area. For fiscal year 2002, DLA’s IT budget is about $654 million. Our recent reviews of DLA’s IT management have identified weaknesses in such important areas as enterprise architecture management, incremental investment management, and software acquisition management. In June 2001, we reported that DLA did not have an enterprise architecture to guide the agency’s investment in its Business Systems Modernization (BSM) project—the agency’s largest IT project. The use of an enterprise architecture, which describes an organization’s mode of operation in useful models, diagrams, and narrative, is required by the OMB guidance that implements the Clinger-Cohen Act of 1996 and is a commercial best practice. Such a “blueprint” can help clarify and optimize the dependencies and relationships among an agency’s business operations and the IT infrastructure and applications supporting them. An effective architecture describes both the environment as it is and the target environment that an organization is aiming for (as well as a plan for the transition from one to the other). We concluded that without this architecture, DLA will be challenged in its efforts to successfully acquire and implement BSM. Further, we reported that DLA was not managing its investment in BSM in an incremental manner, as required by the Clinger-Cohen Act of 1996 and OMB guidance and in accordance with best commercial practices. An incremental approach to investment helps to minimize the risk associated with such large-scale projects as BSM. Accordingly, we recommended that DLA make the development, implementation, and maintenance of an enterprise architecture an agency priority and take steps to incrementally justify and validate its investment in BSM. According to DLA officials, the agency is addressing these issues. 
In January 2002, we reported a wide disparity in the rigor and discipline of software acquisition processes between two DLA systems. Such inconsistency in processes for acquiring software (the most costly and complex component of systems) can lead to the acquisition of systems that do not meet the information needs of management and staff, do not provide support for necessary programs and operations, and cost more and take longer than expected to complete. We also reported that DLA did not have a software process-improvement program in place to effectively strengthen its corporate software acquisition processes, having eliminated the program in 1998. Without a management-supported software process-improvement program, it is unlikely that DLA can effectively improve its institutional software acquisition capabilities, which in turn means that the agency’s software projects will be at risk of not delivering promised capabilities on time and within budget. Accordingly, we recommended that DLA institute a software process-improvement program and correct the software acquisition process weaknesses that we identified. According to DLA officials, the agency is addressing each of these issues. In May 2000, we issued the Information Technology Investment Management (ITIM) maturity framework, which identifies critical processes for successful IT investment and organizes these processes into an assessment framework comprising five stages of maturity. This framework supports the fundamental requirements of the Clinger-Cohen Act of 1996, which requires IT investment and capital planning processes and performance measurement. Additionally, ITIM can provide a useful roadmap for agencies when they are implementing specific, fundamental IT capital planning and investment management practices. The federal Chief Information Officers Council has favorably reviewed the framework, and it is also being used by a number of executive agencies and organizations for designing related policies and procedures and self-led or contractor-based assessments. ITIM establishes a hierarchical set of five different maturity stages. Each stage builds upon the lower stages and represents increased capabilities toward achieving both stable and effective (and thus mature) IT investment management processes. Except for the first stage—which largely reflects ad hoc, undefined, and undisciplined decision and oversight processes—each maturity stage is composed of critical processes essential to satisfy the requirements of that stage. These critical processes are defined by core elements that include organizational commitment (for example, policies and procedures), prerequisites (for example, resource allocation), and activities (for example, implementing procedures). Each core element is composed of a number of key practices. Key practices are the specific tasks and conditions that must be in place for an organization to effectively implement the necessary critical processes. Figure 1 shows the five ITIM stages and a brief description of each stage. Using ITIM, we assessed the extent to which DLA satisfied the five critical processes in stage 2 of the framework. Based on DLA’s acknowledgment that it had not executed any of the key practices in stage 3, we did not independently assess the agency’s capabilities in this stage or stages 4 and 5. 
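As a rough sketch of the ITIM structure just described (maturity stages composed of critical processes, each defined by key practices), the hypothetical Python fragment below shows one way such an assessment could be represented and tallied. The process and practice names are abbreviated and the executed/not executed ratings are illustrative, not GAO's actual scoring.

```python
from dataclasses import dataclass, field

# Rough sketch of the ITIM structure described above: a maturity stage is composed of
# critical processes, each defined by key practices rated as executed or not executed.
# Process and practice names are abbreviated and the ratings are illustrative only.

@dataclass
class CriticalProcess:
    name: str
    key_practices: dict[str, bool] = field(default_factory=dict)  # practice -> executed?

    @property
    def fully_executed(self) -> bool:
        return all(self.key_practices.values())

stage2 = [
    CriticalProcess("IT investment board operations", {
        "boards combine IT and business knowledge": True,
        "written policies and procedures govern board operations": False,
    }),
    CriticalProcess("Identifying business needs for IT projects", {
        "business needs defined and documented for each project": False,
    }),
]

fully = sum(process.fully_executed for process in stage2)
print(f"{fully} of {len(stage2)} stage 2 critical processes fully executed")
for process in stage2:
    done = sum(process.key_practices.values())
    print(f"  {process.name}: {done}/{len(process.key_practices)} key practices executed")
```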
To determine whether DLA had implemented the stage 2 critical processes, we compared relevant DLA policies, procedures, guidance, and documentation associated with investment management activities to the key practices and critical processes in ITIM. We rated the key practices as “executed” based on whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review, or when we determined that there were significant weaknesses in DLA’s execution of the key practice. As part of our analysis, we selected four IT projects as case studies to verify application of the critical processes and practices. We selected projects that (1) supported different DLA business areas (such as materiel management), (2) were in different lifecycle phases (for example, requirements definition, design, operations and maintenance), (3) represented different levels of risk (such as low or medium) as designated by the agency, and (4) included at least one investment that required funding approval by a DOD authority outside of DLA (for example, the Office of the Secretary of Defense (OSD)). The four projects are the following: Business Systems Modernization: This system, which supports DLA’s materiel management business area, is in the concept demonstration phase of development. DLA reported that it spent about $136 million on this system in fiscal year 2001, and it has budgeted about $133 million for fiscal year 2002. BSM is intended to modernize DLA’s materiel management business function, replacing two of its standard systems (the Standard Automated Materiel Management System and the Defense Integrated Subsistence Management System). The project is also intended to enable the agency to reengineer its logistics practices to reflect best commercial business practices. For example, in support of DLA’s goal of reducing its role as a provider and manager of materiel and increasing its role as a manager of supply chain relationships, BSM is to help link customers with appropriate suppliers and to incorporate commercial business practices regarding physical distribution and financial management. The agency has classified this project as high risk, and OSD has funding approval authority for this project. Hazardous Materials Information System (HMIS): This system, which supports DLA’s logistics operations function, was implemented in 1978. In fiscal year 2001, DLA reported that it spent about $1 million on this system and budgeted about $2.4 million for fiscal year 2002. In 1999 DLA began a redesign effort to transform HMIS into a Web-based system with a direct interface to the manufacturers and suppliers of hazardous material. The project is in the development stage. It contains data on the chemical composition of materials classified as “hazardous” for the purposes of usage, storage, and transportation. The system is used by Emergency Response Teams whenever a spill or accident occurs involving hazardous materials. The agency classified this project as low risk, and funding approval occurs within DLA. The Defense Reutilization and Marketing Automated Information System (DAISY): This system, which supports DLA’s materiel reuse and disposal mission, is in the operations and maintenance lifecycle phase. The agency reported that it spent approximately $4.4 million on DAISY in fiscal year 2001, and it has budgeted about $7 million for fiscal year 2002. 
This system is a repository for transactions involving the reutilization, transfer, donation, sale, or ultimate disposal of excess personal property from DOD, federal, and state agencies. The excess property includes spare and repair parts, scrap and recyclable material, precious metals recovery, hazardous material, and hazardous waste disposal. Operated by the Defense Reutilization and Marketing Service, the system is used at 190 locations worldwide. The agency classified this project as low risk, and funding approval occurs within DLA. Standard Automated Materiel Management System (SAMMS): This system, which supports DLA’s materiel management business area, is 30 years old and approaching the end of its useful life. The agency reports that investment in SAMMS (budgeted at approximately $19 million for fiscal year 2002) is directed toward keeping the system operating until its replacement, BSM, becomes fully operational (scheduled for fiscal year 2005). This system provides the Inventory Control Points with information regarding stock levels, as well as with the capabilities required for (1) acquisition and management of wholesale consumable items, (2) direct support for processing requisitions, (3) forecasting of requirements, (4) generation of purchase requests, (5) maintenance of technical data, (6) financial management, (7) identification of items, and (8) asset visibility. The agency has classified the maintenance of SAMMS as a low risk effort, and funding approval occurs within DLA. For these projects, we reviewed project management documentation, such as mission needs statements, project plans, and status reports. We also analyzed charters and meeting minutes for DLA oversight boards, DLA’s draft Automated Information System Emerging Program Life Management (LCM) Review and Milestone Approval Directive and Portfolio Management and Oversight Directives, and DOD’s 5000 series guidance on systems acquisition. In addition, we reviewed documentation related to the agency’s self-assessment of its IT investment operations. To supplement our document reviews, we interviewed senior DLA officials, including the vice director (who sits on the Corporate Board, DLA’s highest level investment decisionmaking body), the chief information officer (CIO), the chief financial officer, and oversight board members. We also interviewed the program managers of our four case study projects, as well as officials responsible for managing the IT investment process and other staff within Information Operations. To determine what actions DLA has taken to improve its IT investment management processes, we interviewed the CIO and officials of the Policy, Plans, and Assessments and the program executive officer (PEO) operations groups within the Information Operations Directorate. These groups are primarily responsible for implementing investment management process improvements. We also reviewed a draft list of IT investment management improvement tasks. We conducted our work at DLA headquarters in Fort Belvoir, Virginia, from June 2001 through January 2002, in accordance with generally accepted government auditing standards. In order to have the capabilities to effectively manage IT investments, an agency should (1) have basic, project-level control and selection practices in place and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency. 
DLA has a majority of the project-level practices in place. However, it is missing several crucial practices, and it is not performing portfolio-based investment management. According to the CIO, DLA's investment management capabilities are still evolving because agency leadership has only recently made IT investment management an area of management focus and priority. Without having crucial processes and related practices in place, DLA lacks essential management controls over its sizable IT investments. At ITIM stage 2 maturity, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify expectation gaps early and take appropriate steps to address them. According to ITIM, critical processes at stage 2 include (1) defining investment board operations, (2) collecting information about existing investments, (3) developing project-level investment control processes, (4) identifying the business needs for each IT project, and (5) developing a basic process for selecting new IT proposals. Table 1 discusses the purpose of each of the stage 2 critical processes. To its credit, DLA has put in place about 75 percent of the key practices associated with stage 2 critical processes. For example, DLA has oversight boards to perform investment management functions, and it has basic project-level control processes to help ensure that IT projects are meeting cost and schedule expectations. However, DLA has not executed several crucial stage 2 investment practices. For example, the business needs for IT projects are not always clearly identified and defined, basic investment selection processes are still being developed, and policies and procedures for project oversight are not documented. Table 2 summarizes the status of DLA's stage 2 critical processes, showing how many associated key practices the agency has executed. DLA's actions in each of the critical processes are discussed in the sections that follow. To help ensure executive management accountability for IT capital planning and investment decisions, an organization should establish a governing board or boards responsible for selecting, controlling, and evaluating IT investments. According to ITIM, effective IT investment board operations require, among other things, that (1) board membership have both IT and business knowledge, (2) board members understand the investment board's policies and procedures and exhibit core competencies in using the agency's IT investment policies and procedures, (3) the organization's executives and line managers support and carry out board decisions, (4) the organization create organization-specific process guidance that includes policies and procedures to direct the board's operations, and (5) the investment board operate according to written policies and procedures. (The full list of key practices is provided in table 3.) DLA has established several oversight boards that perform IT investment management functions. These boards include the following: The DLA Investment Council, which is intended to review, evaluate, and approve new IT and non-IT investments between $100,000 and $1,000,000. The Program Executive Officer Review Board, which is intended to review and approve the implementation of IT investments that are budgeted for over $25 million in all or over $5 million in any one year.
The Corporate Board, which is intended to review, evaluate, and approve all IT and non-IT investments over $1 million. DLA is executing four of the six key practices needed for these boards to operate effectively. For example, the membership of these boards integrates both IT and business knowledge. In addition, board members informed us of their understanding of their board's informal practices. Further, according to IT investment officials, project managers, and agency documentation, the boards have a process for ensuring that their decisions are supported and carried out by organization executives and line managers. This process involves documenting board decisions in meeting minutes, assigning staff to carry out the decisions, and tracking the actions taken on a regular basis until the issues are addressed. Nonetheless, DLA is missing the key ingredient associated with two of the board oversight practices that are needed to operate effectively: organization-specific guidance. This guidance, which serves as official operations documentation, should (1) clearly define the roles of key people within its IT investment process, (2) delineate the significant events and decision points within the processes, (3) identify the external and environmental factors that will influence the processes (that is, legal constraints, the behavior of key subordinate agencies and military customers, and the practices of commercial logistics that DLA is trying to emulate as part of DLA 21), and (4) explain how IT investment-related processes will be coordinated with other organizational plans and processes. DLA does not have guidance that sufficiently addresses these issues. Policies and procedures governing operations are in draft for one board and have not been developed for the two other boards. Without this guidance governing the operations of the investment boards, the agency is at risk of performing key investment decisionmaking activities inconsistently. Such guidance would also provide a degree of transparency that is helpful in both communicating and demonstrating how these decisions are made. Table 3 summarizes the ratings for each key practice and the specific findings supporting the ratings. An IT project inventory provides information to investment decisionmakers to help evaluate the impacts and opportunities created by proposed or continuing investments. This inventory (which can take many forms) should, at a minimum, identify the organization's IT projects (including new and existing systems) and a defined set of relevant investment management information about them (for example, purpose, owner, lifecycle stage, budget cost, physical location, and interfaces with other systems). Information from the IT project inventory can, for example, help identify systems across the organization that provide similar functions and help avoid the commitment of additional funds for redundant systems and processes. It can also help determine more precise development and enhancement costs by informing decisionmakers and other managers of interdependencies among systems and how potential changes in one system can affect the performance of other systems.
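To illustrate the kind of record such an inventory might hold, the sketch below shows one possible representation in Python. It is a hypothetical illustration only, not a description of DLA's actual repository; the record fields and helper functions are our own assumptions based on the minimum information described above.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class InventoryRecord:
    """One entry in a minimal IT project inventory (hypothetical fields)."""
    name: str
    purpose: str          # business function the system supports
    owner: str            # responsible organization or manager
    lifecycle_stage: str  # e.g., "development", "operations and maintenance"
    budget_cost: float    # budgeted cost for the fiscal year, in dollars
    location: str
    interfaces: list[str] = field(default_factory=list)  # systems it exchanges data with

def potentially_redundant(inventory: list[InventoryRecord]) -> dict[str, list[str]]:
    """Group projects by stated purpose; any group with more than one entry
    is a candidate for a redundancy review."""
    by_purpose = defaultdict(list)
    for rec in inventory:
        by_purpose[rec.purpose].append(rec.name)
    return {p: names for p, names in by_purpose.items() if len(names) > 1}

def affected_by_change(inventory: list[InventoryRecord], system: str) -> list[str]:
    """List projects that interface with `system` and so could be affected
    if that system changes."""
    return [rec.name for rec in inventory if system in rec.interfaces]
```

Queries of this kind are what allow a maintained inventory to surface potentially redundant systems and interdependencies before funding decisions are made.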
According to ITIM, effectively managing an IT project inventory requires, among other things, (1) identifying IT projects, collecting relevant information about them, and capturing this information in a repository, (2) assigning responsibility for managing the IT project inventory process to ensure that the inventory meets the needs of the investment management process, (3) developing written policies and procedures for maintaining the IT project inventory, (4) making information from the inventory available to staff and managers throughout the organization so they can use it, for example, to build business cases and to support project selection and control activities, and (5) maintaining the IT project inventory and its information records to contribute to future investment selections and assessments. (The full list of key practices is provided in table 4.) DLA has executed many of the key practices in this critical process. For example, according to DLA's CIO, IT projects are identified and specific information about them is entered into a central repository called the DLA Profile System (DPS). DPS includes, among other things, project descriptions, key contact information, lifecycle stage, and system interfaces. In addition, the CIO is responsible for managing the IT project identification process to ensure that DPS meets the needs of the investment management process. However, DLA has not defined written policies and procedures for how and when users should add to or update information in the DPS. In addition, DLA is not maintaining DPS records, which would be useful during future project selections and investment evaluations, and for documenting the evolution of a project's development. Without appropriate policies and procedures in place to describe the objectives and information requirements of the inventory, DPS is not being used to its full potential as a tool to support the analysis that is essential to effective decisionmaking. Table 4 summarizes the ratings for each key practice and the specific findings supporting the ratings. Investment review boards should effectively oversee IT projects throughout all lifecycle phases (concept, design, development, testing, implementation, and operations/maintenance). At stage 2 maturity, investment review boards should review each project's progress toward predefined cost and schedule expectations, using established criteria and performance measures, and should take corrective actions to address cost and milestone variances. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for project management, (2) developing and maintaining an approved management plan for each IT project, (3) having written policies and procedures for oversight of IT projects, (4) making up-to-date cost and schedule data for each project available to the oversight boards, (5) reviewing each project's performance by regularly comparing actual cost and schedule data to expectations, (6) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved, and (7) using information from the IT project inventory. (The complete list of key practices is provided in table 5.) DLA has executed most of the key practices in this area.
In particular, DLA relies on the guidance in the Department of Defense 5000 series directives for project management and draft guidance in an Automated Information System (AIS) Emerging Program Life-Cycle Management (LCM) Review and Milestone Approval Directive for specific IT project management. In addition, for each of the four projects we reviewed, a project management plan had been approved, and cost and schedule controls were addressed during project review meetings. Further, based on our review of project documentation and discussions with project managers, up-to-date cost and schedule project data were provided to the PEO Review Board. This board oversees project performance regularly by comparing actual cost and schedule data to expectations and has a process for ensuring that, for underperforming projects, corrective actions are documented, agreed to, and tracked. Notwithstanding these strengths, DLA has some weaknesses in project oversight. Specifically, although the Corporate Board and the Investment Council have written charters, there are no written policies or procedures that define their role in collectively overseeing IT projects. Without these policies and procedures, project oversight may be inconsistently applied, leading to the risk that performance problems, such as cost overruns and schedule slippages, may not be identified and resolved in a timely manner. In addition, according to representatives from the oversight boards, they do not use information from the IT project inventory to oversee projects because they are more comfortable using more traditional methods of obtaining and using information (that is, informally talking with subject matter experts and relying on experience). The inventory is of value only to the extent that decisionmakers use it. As discussed earlier, while the inventory need not be the only source of information, it should nevertheless serve as a reliable and consistent tool for understanding project and overall portfolio decisions. Table 5 summarizes the ratings for each key practice and the specific findings supporting the ratings. Defining business needs for each IT project helps ensure that projects support the organization's mission goals and meet users' needs. This critical process creates the link between the organization's business objectives and its IT management strategy. According to ITIM, effectively identifying business needs requires, among other things, (1) defining the organization's business needs or stated mission goals, (2) identifying users for each project who will participate in the project's development and implementation, (3) training IT staff adequately in identifying business needs, and (4) defining business needs for each project. (The complete list of key practices is provided in table 6.) DLA has executed all but one of the key practices associated with effectively defining business needs for IT projects. For example, DLA's mission goals are described in DLA's strategic plan. In addition, according to IT investment management officials, the IT staff is adequately trained in identifying business needs because they generally have prior functional unit experience. Further, according to DLA directives, IT projects are assigned an Integrated Process Team (IPT) to guide and direct the project through the development lifecycle. The IPTs are composed of IT and functional staff.
Moreover, DOD and DLA directives require that business requirements and system users be identified and that users participate in the lifecycle management of the project. According to an IT investment official, each IT project has a users’ group that meets throughout the lifecycle to discuss problems and potential changes related to the system. We verified that this was the case for the four projects we reviewed. While the business needs for three of the four projects we reviewed were clearly identified and defined, DLA has reported that this has not been consistently done for all IT projects. According to IT investment management officials, this inconsistency arose because policies and procedures for developing business needs were not always followed or required. DLA officials have stated that they are developing new guidance to address this problem. However, until this guidance is implemented and enforced, DLA cannot effectively demonstrate that priority mission and business improvement needs are forming the basis for all its IT investment decisions. Table 6 summarizes the ratings for each key practice and the specific findings supporting the ratings. Selecting new IT proposals requires an established and structured process to ensure informed decisionmaking and infuse management accountability. According to ITIM, this critical process requires, among other things, (1) making funding decisions for new IT proposals according to an established process, (2) providing adequate resources for proposal selection activities, (3) using an established proposal selection process, (4) analyzing and ranking new IT proposals according to established selection criteria, including cost and schedule criteria, and (5) designating an official to manage the proposal selection process. (The complete list of key practices is provided in table 7.) DLA has executed some of the key practices for investment proposal selection. For example, DLA executives make funding decisions for IT investments using DOD’s Program Objective Memorandum (POM) process, which is part of DOD’s annual budgeting process. Through this process, proposals for new projects or enhancements to ongoing projects are evaluated by DLA’s IT and financial groups and submitted to OSD through DLA’s Corporate Board with recommendations for funding approval. In addition, according to the CIO, adequate resources have been provided to carry out activities related to the POM process. Nonetheless, DLA has yet to execute some of the critical practices related to this process area. Specifically, DLA acknowledges that the agency is not analyzing and prioritizing new IT proposals according to established selection criteria. Instead, the Corporate Board uses the expertise from the IT organization and its own judgment to analyze and prioritize projects. To its credit, DLA recognizes that it cannot continue to rely solely on the POM process to make sound IT investment selection decisions. Therefore, the agency has been working to establish an IT selection process over the past two budget cycles that is more investment-focused and includes increased involvement from IT Operations staff, necessary information, and established selection criteria. Until DLA implements an effective IT investment selection process that is well established and understood throughout the agency, executives cannot be adequately assured that they are consistently and objectively selecting proposals that best meet the needs and priorities of the agency. 
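To make the idea of analyzing and ranking proposals against established selection criteria concrete, the following is a minimal sketch in Python. The criteria, weights, and scores are hypothetical assumptions for illustration, not DLA's process or criteria prescribed by ITIM.

```python
# Hypothetical selection criteria and weights; an agency would define and document its own.
CRITERIA_WEIGHTS = {
    "mission_alignment": 0.35,
    "expected_benefit": 0.25,
    "cost": 0.20,           # ratings normalized so lower cost scores higher
    "schedule_risk": 0.20,  # ratings normalized so lower risk scores higher
}

def score_proposal(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (each normalized to 0-1, where 1 is best)
    into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

def rank_proposals(proposals: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Return proposals ordered from highest to lowest weighted score."""
    scored = [(name, score_proposal(r)) for name, r in proposals.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    candidates = {
        "Proposal A": {"mission_alignment": 0.9, "expected_benefit": 0.7, "cost": 0.6, "schedule_risk": 0.8},
        "Proposal B": {"mission_alignment": 0.6, "expected_benefit": 0.8, "cost": 0.9, "schedule_risk": 0.5},
    }
    for name, score in rank_proposals(candidates):
        print(f"{name}: {score:.2f}")
```

The value of such a scheme lies not in the particular weights chosen but in scoring every proposal against the same documented criteria, which is what supports consistent and transparent selection decisions.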
Table 7 summarizes the ratings for each key practice and the specific findings supporting the ratings. An IT investment portfolio is an integrated, enterprisewide collection of investments that are assessed and managed collectively based on common criteria. Managing investments within the context of such a portfolio is a conscious, continuous, and proactive approach to expending limited resources on an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an enterprisewide perspective enables an organization to consider its investments comprehensively so that the collective investments optimally address its mission, strategic goals, and objectives. This portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. According to ITIM, stage 3 maturity includes (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, (3) developing a complete portfolio based on the investment analysis, (4) maintaining oversight over the investment performance of the portfolio, and (5) aligning the authority of IT investment boards. Table 8 describes the purposes for the critical processes in stage 3. According to DLA officials, they are currently focusing on implementing stage 2 processes and have not implemented any of the critical processes in stage 3. Until the agency fully implements both stage 2 and 3 processes, it cannot consider investments in a comprehensive manner and determine whether it has the appropriate mix of IT investments to best meet its mission needs and priorities. DLA recognizes the need to improve its IT investment processes, but it has not yet developed a plan for systematically correcting weaknesses. To properly focus and target IT investment process improvements, an organization should fully identify and assess current process strengths and weaknesses (that is, create an investment management capability baseline) as the first step in developing and implementing an improvement plan. As we have previously reported, this plan should, at a minimum, (1) specify measurable goals, objectives, milestones, and needed resources, and (2) clearly assign responsibility and accountability for accomplishing well-defined tasks. The plan should also be documented and approved by agency leadership. In implementing the plan, it is important that DLA measure and report progress against planned commitments, and that appropriate corrective action be taken to address deviations. DLA does not have such a plan. In March 2001, it attempted to baseline agency IT operations by reviewing its project-level investment management practices using ITIM. This effort identified practice strengths and weaknesses, but DLA considered the assessment to be preliminary (to be followed by a more comprehensive assessment at an unspecified later date) and limited in scope. DLA used the assessment results to establish broad milestones for strengthening its investment management process. The agency did not, however, develop a complete process improvement plan. 
For example, it did not (1) specify required resources to accomplish the various tasks, (2) clearly assign responsibility and accountability for accomplishing the tasks, (3) obtain support from senior level officials, and (4) establish performance measures to evaluate the effectiveness of the completed tasks. At the same time, the agency has separately begun other initiatives to improve its investment management processes, but these initiatives are not aligned with the established milestones or with each other. The DLA CIO characterizes the agency’s approach to its various process improvement efforts as a necessary progression that includes some inevitable “trial and error” as it moves toward a complete process improvement plan. Without such a plan that allows the agency to systematically prioritize, sequence, and evaluate improvement efforts, DLA jeopardizes its ability to establish a mature investment process that includes selection and control capabilities that result in greater certainty about future IT investment outcomes. Until recently, IT investment management has not been an area of DLA management attention and focus. As a result, DLA currently finds itself without some of the capabilities that it needs to ensure that its mix of IT investments best meets the agency’s mission and business priorities. To its credit, DLA now recognizes the need to strengthen its IT investment management and has taken positive steps to begin doing so. However, several critical IT investment management capabilities need to be enhanced before DLA can have reasonable assurance that it is maximizing the value of its IT investment dollar and minimizing the associated risks. Moreover, DLA does not yet have a process improvement plan that is endorsed and supported by agency leadership. The absence of such a plan limits DLA’s prospects for introducing the management capabilities necessary for making prudent decisions that maximize the benefits and minimize the risks of its IT investment. To strengthen DLA’s investment management capability and address the weaknesses discussed in this report, we recommend that the secretary of defense direct the DLA director to designate the development and implementation of effective IT investment management processes as an agencywide priority. Further, we recommend that the secretary of defense have the DLA director do the following: Develop a plan, within 6 months, for implementing IT investment management process improvements that is based on GAO’s ITIM stage 2 and 3 critical processes. Ensure that the plan specifies measurable goals and time frames, defines a management structure for directing and controlling the improvements, and establishes review milestones. Ensure that the plan focuses first on correcting the weakness in the ITIM stage 2 critical processes, because these processes collectively provide the foundation for building a mature IT investment management process. Specifically: Develop and issue guidance covering the scope and operations of DLA’s investment review boards. 
Such guidance should include, at a minimum, specific definitions of the roles and responsibilities within the IT investment process; an outline of the significant events and decision points within the processes; an identification of the external and environmental factors that will influence the processes (for example, legal constraints, the behavior of key suppliers or customers, or industry norms), and the manner in which IT investment-related processes will be coordinated with other organization plans and processes. Develop and issue policies and procedures for maintaining DLA’s IT projects inventory for investment management purposes. Finalize and issue policies and procedures (including the use of information from the IT systems and project inventory) for the PEO Review Board’s oversight of IT projects. Develop and issue similar policies and procedures for the other investment boards. Finalize and issue guidance supporting the identification of business needs and implementing management controls to ensure that proposals submitted to DLA for review clearly identify and define business requirements. Develop and issue guidance for the proposal selection process in such a way that the criteria for selection are clearly set forth, including formally assigning responsibility for managing the proposal selection process and establishing management controls to ensure that the proposal selection process is working effectively. Ensure that the plan next focuses on stage 3 critical processes, which are necessary for portfolio management, because along with the stage 2 foundational processes, these processes are necessary for effective management of IT investments. Implement the approved plan and report on progress made against the plan’s goals and time frames to the secretary of defense every 6 months. DOD provided what it termed “official oral comments” from the director for acquisition resources and analysis on a draft of this report. In its comments, DOD concurred with our recommendations and described efforts under way and planned to implement them. However, it recommended that two report captions be changed to more accurately reflect, in DOD’s view, the contents of the report and to eliminate false impressions. Specifically, DOD recommended that we change one caption from “DLA’s Capabilities to Effectively Manage IT Investments Are Limited” to “DLA’s Capabilities to Effectively Manage IT Investments Should Be Improved.” DOD stated that this change is needed to recognize the fact that DLA has completed about 75 percent of the practices associated with stage 2 critical processes. We do not agree. As stated in our report, to effectively manage IT investments an agency should (1) have basic, project-level control and selection practices in place (stage 2 processes) and (2) manage its projects as a portfolio of investments (stage 3 processes). Although DLA has executed most of the key practices associated with stage 2 processes, the agency acknowledges that it has not implemented any of the stage 3 processes. Therefore, our caption as written describes DLA’s IT investment management capabilities appropriately. In addition, DOD recommended that we change the caption “DLA Lacks a Plan to Guide Improvement Efforts” to “DLA Lacks a Published Plan to Guide Improvement Efforts.” DOD stated that this change is needed because DLA has developed some elements of an implementation plan. We do not agree. 
Our point is that DLA did not have a complete process improvement plan, not that it has yet to publish the plan that it has. As we describe in the report, a complete plan should, at a minimum, (1) be based on a full assessment of process strengths and weaknesses, (2) specify measurable goals, objectives, milestones, and needed resources, (3) clearly assign responsibility and accountability for accomplishing well- defined tasks, and (4) be documented and approved by agency leadership. In contrast, DLA’s planning document was based on a preliminary assessment of only stage 2 critical processes and lacked several of the critical attributes listed above. Moreover, DOD stated in its comments that DLA has not completed a formally documented and prioritized implementation plan to resolve stage 2 and 3 practice weaknesses and has yet to complete the self-assessment and gap analysis necessary to define planned action items. Accordingly, it is clear that DLA has not satisfied the tenets of a complete plan, and thus our caption is accurate as written. DOD provided additional comments that we have incorporated as appropriate in the report. We are sending copies of this report to the chairmen and ranking minority members of the Subcommittee on Defense, Senate Committee on Appropriations; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, House Committee on Appropriations; and the Subcommittee on Military Readiness, House Committee on Armed Services. We are also sending copies to the director, Office of Management and Budget; the secretary of defense; the under secretary of defense for acquisition, technology, and logistics; the deputy under secretary of defense for logistics and materiel readiness; and the director, Defense Logistics Agency. Copies will be made available to others upon request. If you have any questions regarding this report, please contact us at (202) 512-3439 and (202) 512-7351, respectively, or by e-mail at hiter@gao.gov and mcclured@gao.gov. An additional GAO contact and staff acknowledgments are listed in appendix II. In addition to the individual named above, key contributors to this report were Barbara Collier, Lester Diamond, Gregory Donnellon, Sabine Paul, and Eric Trout.
The Defense Logistics Agency (DLA) relies extensively on information technology (IT) to carry out its logistics support mission. This report focuses on DLA's processes for making informed IT investment decisions. Because IT investment management has only recently become an area of management focus and commitment at DLA, the agency's ability to effectively manage IT investments is limited. The first step toward establishing effective investment management is putting in place foundational, project-level control and selection processes. The second step toward effective investment management is to continually assess proposed and ongoing projects as an integrated and competing set of investment options. Accomplishing these two steps requires effective development and implementation of a plan, supported by senior management, which defines and prioritizes investment process improvements. Without a well-defined process improvement plan and controls for implementing it, it is unlikely that the agency will establish a mature investment management capability. As a result, GAO continues to question DLA's ability to make informed and prudent investment decisions in IT.
Since 2005, DOD and OPM have made significant progress in reducing delays in making personnel security clearance decisions and met statutory timeliness requirements for DOD's initial clearances completed in fiscal year 2008. IRTPA currently requires that decisions on at least 80 percent of initial clearances be made within an average of 120 days. In December 2008, we conducted an analysis to assess whether DOD and OPM were meeting the current timeliness requirements in IRTPA and examined the fastest 80 percent of initial clearance decisions for military, DOD civilian, and DOD industry personnel. We found that these clearance decisions were completed within 87 days, on average, which was well within IRTPA's requirements. IRTPA further requires that by December 2009, a plan be implemented in which, to the extent practical, 90 percent of initial clearance decisions are made within 60 days, on average. We also analyzed the executive branch's 2009 annual report to Congress, which presented an average of the fastest 90 percent of initial clearance decisions in anticipation of IRTPA's December 2009 requirements. The report stated that the average time for completing the fastest 90 percent of initial clearances for military and DOD civilians in fiscal year 2008 was 124 days. The report also stated that the average time for completing the fastest 90 percent of initial clearances for private industry personnel working on DOD contracts in fiscal year 2008 was 129 days. DOD and OMB officials have noted that the existing clearance process is not likely to allow DOD and other agencies to meet the timeliness requirements that will take effect in December 2009 under IRTPA. IRTPA requires that the executive branch report annually on the progress made during the preceding year toward meeting statutory requirements for security clearances, including timeliness, and also provides broad discretion to the executive branch to report any additional information considered appropriate. Under the timeliness requirements in IRTPA, the executive branch can exclude the slowest clearances and then calculate the average of the remaining clearances. Using this approach and anticipating IRTPA's requirement that by December 2009, a plan be implemented under which, to the extent practical, 90 percent of initial clearance decisions are made within an average of 60 days, the executive branch's 2009 report cited as its sole metric for timeliness the average of the fastest 90 percent of initial clearances. We conducted an independent analysis of all initial clearance decisions that DOD made in fiscal year 2008 that more fully reflects the time spent making clearance decisions. Without excluding any portion of the data or taking an average, we analyzed 100 percent of the 450,000 initial DOD clearance decisions made in fiscal year 2008 for military, DOD civilian, and DOD industry personnel. Figure 2 shows the full range of time it took DOD and OPM to make clearance decisions in fiscal year 2008. As you can see, our independent analysis of all of the initial clearances revealed that 39 percent of the clearance decisions took more than 120 days to complete. In addition, 11 percent of the initial clearance eligibility decisions took more than 300 days to complete.
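The difference between these two ways of measuring timeliness can be shown with a small sketch. The completion times below are hypothetical and for illustration only, but the calculations mirror the "average of the fastest share" metric and the full-distribution view described above.

```python
import statistics

def fastest_share_average(days: list[int], share: float) -> float:
    """Average completion time of the fastest `share` (e.g., 0.80) of decisions,
    excluding the slowest cases as the current IRTPA metric permits."""
    kept = sorted(days)[: max(1, int(len(days) * share))]
    return statistics.mean(kept)

def share_exceeding(days: list[int], threshold: int) -> float:
    """Fraction of all decisions that took longer than `threshold` days."""
    return sum(1 for d in days if d > threshold) / len(days)

# Hypothetical completion times (in days) for illustration only.
durations = [35, 50, 60, 75, 80, 90, 95, 110, 140, 160, 210, 310, 400]

print(fastest_share_average(durations, 0.80))  # metric tied to the 120-day requirement
print(fastest_share_average(durations, 0.90))  # metric anticipated for December 2009
print(share_exceeding(durations, 120))         # share of all decisions over 120 days
print(share_exceeding(durations, 300))         # share of all decisions over 300 days
```

Even with this illustrative data, the averages of the fastest 80 and 90 percent look modest while a noticeable share of all decisions still exceeds the 120-day and 300-day marks, which is the pattern our analysis of the full fiscal year 2008 data revealed.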
By limiting its reporting on timeliness to the average of the fastest 90 percent of the initial clearance decisions made in fiscal year 2008 and excluding mention of the slowest clearances, the executive branch did not provide congressional decision makers with visibility over the full range of time it takes to make all initial clearance decisions and the reasons why delays continue to exist. In our recent report, we recommended that the Deputy Director for Management at OMB (who is responsible for submitting the annual report) include comprehensive data on the timeliness of the personnel security clearance process in future versions of the IRTPA-required annual report to Congress. In oral comments in response to our recommendation, OMB concurred, recognized the need for comprehensive timeliness reporting, and underscored the importance of reporting on the full range of time to complete all initial clearances. We note, Mr. Chairman, that you previously submitted an amendment to expand IRTPA's provision on reporting on clearance timeliness. While IRTPA contains no requirement for the executive branch to report any information on quality, the act grants the executive branch broad latitude to include any appropriate information in its reports. The executive branch's 2006 through 2009 IRTPA-required reports to Congress on the clearance process provided congressional decision makers with little information on quality, a measure that could address topics such as the completeness of the documentation used to make clearance decisions. The 2006 and 2008 reports did not contain any mention of quality, and the 2007 report mentioned a single quality measure: the frequency with which adjudicating agencies returned OPM's investigative reports because of quality deficiencies. The 2009 report does not contain any data on quality but proposes two measures of investigative report quality and identifies plans to measure adjudicative quality. Specifically, the discussion of these measures is included in the Joint Reform Team's December 2008 report, Security and Suitability Process Reform, which was included in the executive branch's 2009 report. We have previously reported that information on timeliness alone does not communicate a complete picture of the clearance process, and we have emphasized the importance of ensuring quality in all phases of the clearance process. For example, we recently estimated that with respect to initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports and some DOD adjudicative files. We independently estimated that 87 percent of about 3,500 investigative reports that adjudicators used to make clearance decisions were missing required documentation, and the documentation most often missing was employment verification. Incomplete documentation may lead to increases in both the time needed to complete the clearance process and overall process costs, and it may reduce the assurance that appropriate safeguards are in place to prevent DOD from granting clearances to untrustworthy individuals. Because the executive branch has not sufficiently addressed quality in its reports, it has missed opportunities to provide congressional decision makers with greater visibility over the clearance process. In our most recent report, we recommended that the Deputy Director for Management at OMB include measures of quality in future versions of the IRTPA-required annual reports.
In oral comments, OMB concurred with our recommendation and emphasized the importance of providing Congress more transparency about quality in the clearance process. Initial joint reform efforts partially reflect key practices for organizational transformation that we have identified, such as having committed leadership and a dedicated implementation team, but reports issued by the Joint Reform Team do not provide a strategic framework that contains important elements of successful transformation, including long-term goals with related outcome-focused performance measures to show progress, nor do they identify potential obstacles to progress and possible remedies. Consistent with some of the key practices for organizational transformation, a June 2008 Executive Order established the Suitability and Security Clearance Performance Accountability Council, commonly known as the Performance Accountability Council, as the head of the governmentwide governance structure responsible for achieving clearance reform goals and driving and overseeing the implementation of reform efforts. The Deputy Director for Management at OMB—who was confirmed in June 2009—serves as the Chair of the Council, and the Order also designated the Director of OPM and the Director of National Intelligence as Executive Agents for Suitability and Security, respectively. Membership on the council currently includes senior executive leaders from 11 federal agencies. In addition to high-level leadership of the Performance Accountability Council, the reform effort has benefited from a dedicated, multi-agency implementation team—the Joint Reform Team—to manage the transformation process from the beginning. The Joint Reform Team, while not formally part of the governance structure established by Executive Order 13467, works under the Council to provide progress reports to the President, recommend research priorities, and oversee the development and implementation of an information technology strategy, among other things. In addition to the key practices, the three reports issued by the Joint Reform Team have begun to address essential factors for reforming the security clearance process that we identified in prior work and that are also found in IRTPA. These factors include (1) developing a sound requirements determination process, (2) engaging in governmentwide reciprocity, (3) building quality into every step of the process, (4) consolidating information technology, and (5) identifying and reporting long-term funding requirements. While the personnel security clearance joint reform reports, which we reviewed collectively, begin to address essential factors for reforming the security clearance process, which represents positive steps, the Joint Reform Team’s information technology strategy does not yet define roles and responsibilities for implementing a new automated capability that is intended to be a cross-agency collaborative initiative. GAO’s prior work on key collaboration practices has stressed the importance of defining these roles and responsibilities when initiating cross-agency initiatives. In addition, the Joint Reform Team’s reports do not contain any information on initiatives that will require funding, determine how much they will cost, or identify potential funding sources. Without long-term funding requirements, decision makers in both the executive and legislative branches will lack important information for comparing and prioritizing proposals for reforming the clearance processes. 
The reform effort's success will depend on the extent to which the Joint Reform Team is able to fully address these key factors moving forward. Although the high-level leadership and governance structure of the current reform effort distinguish it from previous efforts, it is difficult to gauge the progress of reform, or determine if corrective action is needed, because the council, through the Joint Reform Team, has not established a method for evaluating the progress of the reform efforts. Without a strategic framework that fully addresses the long-standing security clearance problems and incorporates key practices for transformation, including the ability to demonstrate progress leading to desired results, the Joint Reform Team is not in a position to demonstrate to decision makers the extent of progress that it is making toward achieving its desired outcomes, and the effort is at risk of losing momentum and not being fully implemented. In our May 2009 report, we recommended that OMB's Deputy Director for Management, as Chair of the Performance Accountability Council, ensure that the appropriate entities (such as the Performance Accountability Council, its subcommittees, or the Joint Reform Team) establish a strategic framework for the joint reform effort to include (1) a mission statement and strategic goals; (2) outcome-focused performance measures to continually evaluate the progress of the reform effort toward meeting its goals and addressing long-standing problems with the security clearance process; (3) a formal, comprehensive communication strategy that includes consistency of message and encourages two-way communication between the Performance Accountability Council and key stakeholders; (4) a clear delineation of roles and responsibilities for the implementation of the information technology strategy among all agencies responsible for developing and implementing components of the information technology strategy; and (5) long-term funding requirements for security clearance reform, including estimates of potential cost savings from the reformed process, and provide them to decision makers in Congress and the executive branch. In oral comments on our report, OMB stated that it partially concurred with our recommendation to establish a strategic framework for the joint reform effort. Further, in written agency comments provided to us jointly by DOD and ODNI, they also partially concurred with our recommendation. Additionally, DOD and ODNI commented on the specific elements of the strategic framework that we included as part of our recommendation. For example, in the comments, DOD and ODNI agreed that the reform effort must contain outcome-focused performance measures, but added that these metrics must evolve as the process improvements and new capabilities are developed and implemented because the effort is iterative and in phased development. We continue to believe that outcome-focused performance measures are a critical tool that can be used to guide the reform effort and allow overseers to determine when the reform effort has accomplished its goals and purpose. In addition, DOD and ODNI asserted that considerable work has already been done on information technology for the reform effort, but added that even clearer roles and responsibilities will be identified moving forward.
Regarding our finding that, at present, no single database exists in accordance with IRTPA’s requirement that OPM establish an integrated database that tracks investigations and adjudication information, DOD and ODNI stated that the reform effort continues its iterative implementation of improvements to systems that improve access to information that agencies need. DOD and ODNI also acknowledged that more work needs to be done to identify long-term funding requirements. Mr. Chairman, I want to conclude by reiterating that DOD and OPM are meeting current IRTPA timeliness requirements, which means that 80 percent of initial clearance decisions are made within 120 days, on average. This represents significant and noteworthy progress from our finding in 2007, when we reported that industry personnel waited more than 1 year, on average, to receive a top secret clearance. I would also like to emphasize that, although the high-level leadership and governance structure of the current reform effort distinguish it from previous attempts at clearance reform, it is imperative that OMB’s newly appointed Deputy Director for Management continue in the crucial role as chair of the Performance Accountability Council in deciding (1) how to implement the recommendations contained in our most recent reports, (2) what types of actions are necessary for developing a corrective action plan, and (3) how the corrective measures will be implemented. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are David E. Moser, Assistant Director; James D. Ashley; Lori Atkinson; Joseph M. Capuano; Sara Cradic; Mae Jones; Shvetal Khanna; James P. Klein; Ron La Due Lake; and Gregory Marchand. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488 Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. DOD Personnel Clearances: Preliminary Observations about Timeliness and Quality. GAO-09-261R. Washington, D.C.: December 19, 2008. Personnel Security Clearance: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD’s Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. 
DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, but Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005.
Due to concerns about long standing delays in the security clearance process, Congress mandated reforms in the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA), which requires, among other things, that the executive branch report annually to Congress. Since 2005, the Department of Defense's (DOD) clearance program has been on GAO's high-risk list due to delays and incomplete documentation. The Office of Personnel Management (OPM) conducts much of the government's clearance investigations. In 2007, the Director of National Intelligence and DOD established a Joint Reform Team to coordinate governmentwide improvement efforts for the process. The Office of Management and Budget (OMB) oversees these efforts. Based on two recent GAO reports, this statement addresses (1) progress in reducing delays at DOD, (2) opportunities for improving executive branch reports to Congress and (3) the extent to which joint reform efforts reflect key factors for reform. GAO independently analyzed DOD clearances granted in fiscal year 2008, assessed the executive branch's 2006-2009 reports to Congress, and compared three joint reform reports to key transformation practices. GAO previously recommended that OMB improve the transparency in executive branch reporting and establish a strategic framework. OMB concurred or partially concurred with these recommendations. DOD and OPM have made significant progress in reducing delays in making security clearance decisions and met statutory timeliness requirements for DOD's initial clearances completed in fiscal year 2008. IRTPA currently requires that decisions on at least 80 percent of initial clearances be made within an average of 120 days. In 2008, GAO found that OPM and DOD made initial decisions on these clearances within 87 days, on average. Opportunities exist for the executive branch to improve its annual reports to Congress. For example, the executive branch's 2009 report to Congress did not reflect the full range of time it took to make all initial clearance decisions and has provided little information on quality. Under the current IRTPA requirements, the executive branch can exclude the slowest 20 percent of clearances and then calculate timeliness based on an average of the remaining clearances. GAO analyzed 100 percent of initial clearances granted in 2008 without taking averages or excluding the slowest clearances and found that 39 percent took more than 120 days. The absence of comprehensive reporting limits full visibility over the timeliness of initial clearance decisions. With respect to quality, although IRTPA grants the executive branch latitude in reporting, the 2006-2009 reports provided little information on quality. However, the 2009 report identified quality measures that the executive branch proposes to collect. GAO has stated that timeliness alone does not provide a complete picture of the clearance process. For example, GAO recently estimated that with respect to initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports. Greater attention to quality could increase instances of reciprocity--an entity's acceptance of another entity's clearances. 
Initial joint reform efforts reflect key practices for organizational transformation that GAO has identified, such as having committed leadership and a dedicated implementation team, but the Joint Reform Team's reports do not provide a strategic framework that contains important elements of successful transformation, including long-term goals with outcome-focused performance measures, nor do they identify potential obstacles to progress and possible remedies. Further, GAO's prior work and IRTPA identified several factors key to reforming the clearance process. These include (1) engaging in governmentwide reciprocity, (2) consolidating information technology, and (3) identifying and reporting long-term funding requirements. However, the Joint Reform Team's information technology strategy does not yet define roles and responsibilities for implementing a new automated capability which is intended to be a cross-agency collaborative initiative. Also, the joint reform reports do not contain information on funding requirements or identify funding sources. The reform effort's success will depend upon the extent to which the Joint Reform Team is able to fully address these key factors moving forward. Further, it is imperative that OMB's Deputy Director for Management continue in the crucial role as chair of the Performance Accountability Council, which oversees joint reform team efforts.
The SDB program in various forms has been in existence for the past 14 years. While criteria to qualify as an SDB remained essentially the same during this period, a Supreme Court decision in 1995—Adarand v. Pena— resulted in the federal government examining how it implemented “affirmative action” programs, including certain procurement preference programs. Subsequently, the federal government established a program to certify SDBs as eligible for preferences when being considered for federal prime and subcontracting opportunities. The SDB program was established by the National Defense Authorization Act of 1987, and applies to the Department of Defense (DOD), the National Aeronautics and Space Administration (NASA), and the U. S. Coast Guard. The implementing regulations define SDBs as small business concerns that are owned and controlled by socially and economically disadvantaged individuals who have been subjected to racial or ethnic prejudice or cultural bias and who have limited capital and credit opportunities. African, Asian, Hispanic, and Native Americans are presumed by regulation to be socially disadvantaged. An individual who is not a member of a designated group presumed to be socially disadvantaged had to establish individual social disadvantage on the basis of clear and convincing evidence which, according to the SBA's OIG audit report, is a difficult standard to meet. Under this standard, an applicant must produce evidence to show that it is highly probable that the applicant is a socially disadvantaged business concern. The regulations further specify that to qualify as an SDB, a small business concern had to (1) be at least 51 percent owned and controlled by a socially and economically disadvantaged individual or individuals; (2) meet the SBA-established size standard based on the business' primary industry as established by the Standard Industrial Classification (SIC) code; and (3) have principals who have a personal net worth, excluding the value of the business and personal home, less than $750,000. The Federal Acquisition Streamlining Act of 1994 (FASA) expanded the program to all federal agencies. In addition to the governmentwide programs, various other federal laws contain provisions designed to assist SDBs that are applicable to specific executive departments or independent agencies. For example, the Surface Transportation and Uniform Relocation Assistance Act of 1987 required the Department of Transportation to expend not less than 10 percent of federal highway and transit funds with disadvantaged business enterprises. Amendments in 1987 and 1992 to the Airport and Airway Improvement Act of 1982 imposed similar requirements with regard to airport programs. Other statutes contain provisions to encourage contracting with SDBs by various departments and agencies, including the Department of Energy, the Department of State, the Environmental Protection Agency, and the Federal Deposit Insurance Corporation. Prior to the recent changes in the SDB program, small business concerns could self-certify that they were small and disadvantaged. According to an SBA official, unless otherwise challenged by an interested party, the contracting agency accepted the self-representation to be accurate. 
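As a simplified illustration of the numeric thresholds in the SDB eligibility criteria described above (51 percent ownership by disadvantaged individuals, the applicable SIC-based size standard, and the $750,000 personal net worth ceiling), the sketch below checks those three tests in Python. It is a hypothetical example; the function and parameter names are ours, and it omits the social disadvantage and control determinations that certification also requires.

```python
def meets_sdb_thresholds(ownership_pct_disadvantaged: float,
                         within_sba_size_standard: bool,
                         principal_net_worth: float) -> bool:
    """Check only the three regulatory thresholds described above (a simplified
    illustration; the actual determination also weighs evidence of social
    disadvantage and control of the firm)."""
    return (ownership_pct_disadvantaged >= 51.0
            and within_sba_size_standard
            and principal_net_worth < 750_000)

# Example: a firm 60 percent owned by disadvantaged individuals, within its
# SIC-based size standard, whose principal's net worth (excluding the value of
# the business and personal home) is $500,000.
print(meets_sdb_thresholds(60.0, True, 500_000))  # True
```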
Between 1987 and the Adarand decision, self-certified SDBs were eligible to receive two main benefits: (1) a 10 percent evaluation preference in competitive DOD acquisitions where that award was based on price and price-related factors and (2) the ability to compete for contracts set-aside for SDBs for certain DOD acquisitions where agency officials believed that there was a reasonable expectation that offers would be received from at least two responsible SDBs. Though FASA extended the authority to implement these benefits to all federal agencies, because of the 1995 Adarand decision and the effort to reform federal affirmative action programs in light of the decision, regulations to implement the authority were delayed. In the 1995 Adarand decision, the Supreme Court held that all federal affirmative action programs that use racial classifications are subject to strict judicial scrutiny. To meet this standard, a program must be shown to meet a compelling governmental interest and must be narrowly tailored to meet that interest. The Court questioned whether the program at issue in the Adarand case, which involved highway contracts at the Department of Transportation, met that test. The Court decision resulted in the federal government's examining all affirmative action programs, including procurement preference programs. One issue that was addressed following the Supreme Court decision was the government's policy that allowed firms to self-certify as SDBs. The Supreme Court decision in Adarand resulted in the administration having to make changes to the SDB program. Tasked by the administration, the Department of Justice (DOJ) conducted a review of affirmative action in federal procurement programs. DOD, one of the largest contracting agencies, was the focus of the initial post-Adarand compliance actions by the federal government. DOJ reviewed the procurement mechanisms used by DOD, including set-asides, direct competitive awards, and price evaluations. On May 23, 1996, DOJ issued a proposed structure to reform affirmative action in federal procurement to ensure compliance with the tests of constitutionality established in the Adarand decision. The DOJ proposal included a 2-year ban on DOD's use of set-aside programs for SDBs and the elimination of the SDB set-asides for civilian agencies, allowing only bidding and evaluation credits. The proposal also included standards by which a firm could apply to be certified as an SDB. This proposal also reduced the burden of proof from “clear and convincing evidence” to a “preponderance of the evidence” standard. This lesser evidentiary standard requires that applicants show that they are more likely than not to meet the criteria for social disadvantage. Because of its experience in certifying 8(a) businesses and resolving protests in connection with both the 8(a) and the previous SDB set-aside programs, SBA was chosen to pilot and administer a centralized program for SDB certification. In August 1998, SBA set up the Office of Small Disadvantage Business Certification and Eligibility to implement the DOJ proposal. According to an SBA official, SBA projected that by October 1999, an estimated 30,000 firms would apply to SBA for certification based, in large part, on the number of firms that self-certified as SDBs under the previous program. Under the SDBC program, small businesses seeking to obtain SDB procurement opportunities must first demonstrate that they meet the eligibility criteria to qualify as an SDB. 
Effective October 1, 1998, small business concerns must receive certification from SBA that they qualify as an SDB for purposes of receiving a price evaluation adjustment when competing for a prime contract. As of January 1, 1999, monetary incentives became available for prime contractors that met and exceeded their subcontracting goals. Also, effective October 1, 1999, SDBs that waived the price evaluation adjustment and large business prime contractors that used certified SDBs as subcontractors in certain industries were eligible for evaluation credits. While the SDB set-aside program was suspended, price and evaluation credits remained available through the following three procurement mechanisms: (1) qualified SDBs are eligible for price evaluation adjustments of up to 10 percent when bidding on federal prime contracts in certain industries, (2) prime contractors may receive evaluation credits for their plans to subcontract with SDBs in major authorized SIC groups, and (3) prime contractors that exceed specified targets for SDB subcontracting in the major authorized SIC groups can receive monetary incentives. Although the initial pilot covering civilian agencies' authority to use price and evaluation credits expired on September 30, 2000, the administration is seeking a 3-year extension of the program as part of SBA's pending reauthorization bill. The DOD authority was extended for another 3 years. During this time, SBA, DOD, and the Department of Commerce are to evaluate the performance of the program and determine whether the program has benefited SDBs and whether the reinstitution of set-asides should be considered. As of August 24, 2000, according to SBA officials, 9,034 small business firms were certified as SDBs. Of these firms, 6,405 were grandfathered into the SDBC program due to their 8(a) status. The remaining 2,629, or 29 percent, were small business firms that applied to the program and were certified by SBA. According to SBA, 5,456 small business firms applied to the program, a significantly lower number than the 30,000 applications SBA anticipated. Of the 5,456 applications submitted for certification, SBA returned 1,990 applications as incomplete and denied 241 applications for SDB certification. Applicants withdrew 307 applications for unknown reasons, and the remaining 289 applications (beyond the 2,629 that were certified) were in various stages of screening and processing. Of the 9,034 certified SDBs, according to an SBA official, 6,405 firms, or 71 percent, were automatically grandfathered into the SDB program due to their 8(a) certification. Of those firms that were grandfathered, 5,689 firms were 8(a) business development firms, and 716 were firms that recently graduated from the 8(a) program but qualified as an SDB because they still met the ownership and personal wealth criteria, according to an SBA official. The official also reported that, as of August 24, 2000, SBA had certified 2,629 firms as SDBs—1,302 firms were certified in the first year of the program, from August 24, 1998, through August 23, 1999, and 1,327 firms were certified from August 24, 1999, through August 24, 2000. Table 1 shows the composition of the SDB certifications. According to SBA officials, 5,456 applications were submitted by small businesses for SDB certification from August 24, 1998, to August 24, 2000. Of the 5,456 applications, 3,377, or 62 percent, were determined to be complete and passed the screening phase of the certification process.
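As a quick arithmetic cross-check of the counts and percentages reported above, the following sketch recomputes them; the figures are those cited from SBA, and only the arithmetic is illustrative.

```python
# Recompute the percentages reported above from the SBA counts as of August 24, 2000.
certified_total = 9_034
grandfathered_8a = 6_405
certified_after_applying = 2_629
applications = 5_456
complete_and_screened = 3_377
denied = 241
withdrawn = 307

print(round(100 * grandfathered_8a / certified_total))                 # ~71 percent grandfathered 8(a) firms
print(round(100 * certified_after_applying / certified_total))         # ~29 percent applied and were certified
print(round(100 * complete_and_screened / applications))               # ~62 percent of applications passed screening
print(round(100 * certified_after_applying / complete_and_screened))   # ~78 percent of complete applications certified
print(complete_and_screened - certified_after_applying - denied - withdrawn)  # 200 applications still in process
```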
According to SBA officials, 1,990 applications, or 36 percent, were determined to be incomplete and subsequently returned to the applicant during this period, and 89, or 2 percent, were "in-screening," meaning that the application was being reviewed by an analyst for completeness. Table 2 shows the status of the applications submitted to SBA by small business concerns for SDB certification. SBA certified 2,629, or 78 percent, of the 3,377 applications it considered complete from August 24, 1998, through August 24, 2000. SBA denied certification to 241 applicants, or 7 percent, according to SBA officials. The two primary reasons SBA officials gave for denying certification were that (1) designated group members exceeded the economic threshold or (2) nondesignated group members did not meet the social disadvantage standard. As for the remaining 507 complete applications, SBA officials also reported that applicants withdrew 307 applications for unknown reasons, and 200 were "in process," meaning they were being reviewed to determine whether the applicant met the eligibility criteria. Table 3 shows the status of all complete applications submitted to SBA as of August 24, 2000. The number of SDBs that have been certified through the SDBC program is significantly lower than the 30,000 projected by SBA, based on the number of firms that had self-certified as SDBs. Officials from SBA, two federal agencies' Offices of Small and Disadvantaged Business Utilization, the U.S. Chamber of Commerce, the Women's Business Enterprise National Council, the National Minority Supplier Development Council, and National Small Business United cited four broad factors that, they believed, combined to explain the lower-than-anticipated number of SDB certification applications. These factors included, for some firms: (1) confusion about the program's implementation, (2) the administrative and financial burden of applying, (3) questions regarding the benefits of obtaining the SDB certification, and (4) not qualifying as SDBs. SBA officials and officials from other organizations we interviewed agreed that businesses might not have applied for certification due to uncertainty about when or how the SDB certifications would be implemented. Criticisms and lack of buy-in from outside groups on the SDB certification process and changes to the program's implementation dates may have created confusion for some firms, while some others may have adopted a "wait-and-see" attitude. Officials from two of the seven organizations that we talked to said that, when developing the certification process, SBA did not solicit the support of small business advocacy organizations that represent the interests of small business concerns. The two officials also stated that some advocacy groups opposed the structure and criteria used to establish SDB certification as well as the onerous documentation requirements. Consequently, those groups have not encouraged their members to participate in the program because these issues remain unresolved. One of the officials also believed that SDB owners were not educated about the process, which might have led them not to apply for certification. Compounding the problem of conflicting or inadequate information about the certification requirements, according to SBA officials, was the shifting of implementation dates.
The implementation date for the requirement that prime contractors use only certified SDBs in meeting their subcontracting goals and receive evaluation credits under the SDB participation program also changed several times. For example, the implementation date for the program was originally January 1, 1999, then changed to July 1999, with a final extension to October 1999. Consequently, according to SBA and some of the advocacy group representatives, SDBs may have delayed applying for certification because of uncertainty as to actual deadlines and, in some cases, may have adopted a wait-and-see attitude regarding program requirements and criteria. Officials interviewed from six of the seven organizations agreed that another key factor explaining the lower-than-anticipated number of applicants was that small business owners view the application process as an administrative burden compared with self-certification. Officials from four of the seven organizations interviewed pointed out that the certification requirement was a financial burden compared with the self-certification process. Previously, firms only had to attest that they qualified as SDBs. To be certified as SDBs, firms have to complete and submit one of several different SDB applications, depending on the type of business to be certified. In addition to the administrative burden, businesses can incur significant expenses under the new certification procedures to ensure that their application package is complete and accurate. For example, businesses can go to a private certifier to help them complete their application, but this service can cost up to several thousand dollars, depending on the services performed. According to one small business advocacy official, this expense can be prohibitive for a number of firms. Adding to the issues of confusion about the program's requirements and administrative burden, according to officials interviewed, is the view held by some small businesses and shared by several SBA officials that there is no real benefit to participating in the program. Officials gave different reasons for this view. Two officials we interviewed, as well as officials from SBA, said that some small businesses believe that they are unlikely to receive federal contracts due to both real and perceived restrictions on agencies' use of price evaluation adjustments and therefore questioned the value of obtaining SDB certification. An SBA official pointed out, for example, that DOD, which accounts for about 67 percent of federal procurement dollars spent, is statutorily barred from using price evaluation adjustments once it exceeds its SDB contracting goal. Alternatively, two officials from other organizations we interviewed said that other firms do not see the benefit of certification because they feel confident that they can receive contracts through open competition regardless of their certification status, particularly those that have established contracting relationships. Consequently, small businesses' view that the certification process is an administrative and financial burden, combined with the low value some place on SDB certification, may have discouraged small businesses from applying for certification, according to these officials. Finally, an SBA official we interviewed pointed out that, in some cases, firms that had previously self-certified as SDBs might not currently qualify for SDB status.
Although she did not have data showing how many firms fit in this category, the SBA official believed that, based on her experience, exceeding the personal wealth threshold of $750,000 was one reason that firms either did not qualify or no longer qualified as SDBs. We provided a draft of this report to the Administrator of the Small Business Administration for her review and comment. On December 19, 2000, we received oral comments from the Associate Administrator, Office of Planning and Liaison (formerly Associate Administrator, Office of Government Contracting and Minority Enterprise Development), and from the Assistant Administrator, Office of Outreach and Marketing (formerly Assistant Administrator, Office of Small Disadvantaged Business Certification and Eligibility). Both officials stated that they generally concurred with the information included in the draft report; however, they provided clarifying technical information that we have included in this report as appropriate. To determine the number of businesses that SBA had certified as socially and economically disadvantaged since the implementation of the SDBC program, we met with and obtained information from SBA and reviewed data contained in the SBA Pro-Net database. In addition, we reviewed the SBA OIG's audit report on the SDB certification program, laws and regulations pertaining to SDBs, and the Supreme Court's Adarand decision. We did not verify data provided by SBA. For our second objective, to obtain views on the reasons for the lower-than-expected SDB certifications, we interviewed officials from SBA and DOJ's Office of the Assistant Attorney General for Civil Rights, as well as officials from the U.S. Chamber of Commerce, the Women's Business Enterprise National Council, the National Minority Supplier Development Council, and National Small Business United. Also, we sent letters to 30 representatives from federal agencies' Offices of Small and Disadvantaged Business Utilization requesting their views on reasons for the lower-than-expected SDB certifications. Of the 30 federal agency representatives, we received views from Commerce and DOJ within the time frame specified in our letter, which we have included in this report. We did not validate the factors cited by these organizations for explaining the lower-than-expected certifications, nor was there empirical evidence available to validate or refute these views. Also, we did not evaluate the performance and implementation of the SDB program in achieving the governmentwide goal or its effectiveness in certifying SDBs. We conducted our review in Washington, D.C., from July through September 2000 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days. At that time, we will send copies of this report to appropriate congressional committees and interested Members of Congress. We will also send copies to the Honorable Aida Alvarez, Administrator, Small Business Administration; the Administrator, General Services Administration; and the Director, Office of Management and Budget. We will also make copies available to others on request. If you have questions regarding this report, please contact me at (202) 512-8984. Major contributors to this assignment were Hilary Sullivan, Geraldine Beard, William Woods, and Sylvia Schatz.
The federal government has an annual, governmentwide procurement goal of at least 5 percent for small disadvantaged businesses (SDB). SDBs are eligible for various price and evaluation benefits when being considered for federal contract awards. SDB firms must have their SDB status certified by the Small Business Administration (SBA). Because of concerns over reports that fewer businesses were receiving SDB certification than expected, GAO examined SBA's certification process to (1) determine the number of businesses that SBA had certified as socially and economically disadvantaged since the implementation of the Small Disadvantaged Business Certification program and (2) obtain views on reasons why the number of SDB certifications differs from the number of firms that had previously self-certified as SDBs. SBA records show that 9,034 small business firms were certified as SDBs as of August 24, 2000. According to SBA officials, 6,405 of these were automatically certified because of their 8(a) certification. The number of SDBs that have been certified by SBA is significantly lower than the 30,000 projected by SBA based on the number of firms that had self-certified as SDBs. Possible reasons for this discrepancy include (1) company reluctance to participate because of uncertainty as to when or how the program would be implemented, (2) the perception by businesses that the application process is burdensome, and (3) the belief by some companies that the benefits do not justify the effort.
Holding federal elections in the United States is a massive enterprise, administered primarily at the local level. On federal Election Day, millions of voters across the country visit polling places, which are located in schools, recreation centers, churches, various government buildings, and even private homes. For the 2008 federal election, state and local election officials recruited and trained about 2 million poll workers across the country. Generally, each of the 50 states, the District of Columbia, and the U.S. territories also plays a role in elections by establishing election laws and policies for its respective election jurisdictions. While federal elections are generally conducted under state laws and policies, several federal laws apply to voting, and some provisions specifically address accessibility issues for voters with disabilities. These federal laws collectively address two issues that are essential to ensuring that voters with disabilities can go to polling places and cast their ballots independently and privately as do nondisabled voters. These two issues are physical access and voting systems that enable people with disabilities to cast a private and independent vote. In 1984, Congress enacted the Voting Accessibility for the Elderly and Handicapped Act (VAEHA), which required political subdivisions responsible for conducting elections to ensure that all polling places for federal elections are accessible to elderly voters and voters with disabilities, with limited exceptions. One such exception occurs when the chief election officer of the state determines that no accessible polling places are available in a political subdivision, and that officer ensures that any elderly voter or voter with a disability assigned to an inaccessible polling place will, upon advance request, either be assigned to an accessible polling place or be provided with an alternative means to cast a ballot on the day of the election. Under the VAEHA, the definition of "accessible" is determined under guidelines established by the state's chief election officer, but the law does not specify standards or minimum requirements for those guidelines. Additionally, states are required to make available voting aids for elderly voters and voters with disabilities, including instructions printed in large type at each polling place and information by telecommunications devices for the deaf. Title II of the Americans with Disabilities Act of 1990 (ADA) also contains provisions that help increase the accessibility of voting for individuals with disabilities. Specifically, title II and its implementing regulations require that people with disabilities have access to basic public services, including the right to vote. Although the ADA does not strictly require all polling places to be accessible, public entities must make reasonable modifications in policies, practices, or procedures to avoid discrimination against people with disabilities. Moreover, no person with a disability may, by reason of disability, be excluded from participating in or be denied the benefits of any public program, service, or activity. State and local governments may comply with ADA accessibility requirements in a variety of ways, such as redesigning equipment, reassigning services to accessible buildings or alternative accessible sites, or altering existing facilities or constructing new ones.
However, state and local governments are not required to take actions that would threaten the historical significance of a historic property, fundamentally alter the nature of a service, or impose any undue financial and administrative burdens. Moreover, a public entity is not required to make structural changes in existing facilities where other methods are effective in achieving compliance. Title III of the ADA covers commercial facilities and places of public accommodation, such as private schools and privately operated recreational centers that may also be used as polling places. Public accommodations must make reasonable modifications in policies, practices, or procedures to facilitate access for people with disabilities. These facilities are also required to remove physical barriers in existing buildings when it is “readily achievable” to do so, that is, when the removal can be done without much difficulty or expense, given the entity’s resources. When the removal of an architectural barrier cannot be accomplished easily, the entity may take alternative measures to facilitate accessibility. All buildings newly constructed by public accommodations and commercial facilities must be readily accessible, and any alterations to an existing building are required, to the maximum extent feasible, to be readily accessible to people with disabilities, including those who use wheelchairs. The Voting Rights Act of 1965, as amended, provides for voter assistance in the voting room. Specifically, the Voting Rights Act, among other things, authorizes voting assistance for blind, disabled, or illiterate persons. Voters who require assistance to vote by reason of blindness, disability, or the inability to read or write may be given assistance by a person of the voter’s choice, other than the voter’s employer or agent of that employer or officer or agent of the voter’s union. Most recently, Congress passed HAVA, which contains a number of provisions to help increase the accessibility of voting for people with disabilities. In particular, section 301(a) of HAVA outlines minimum standards for voting systems used in federal elections. This section specifically states that the voting system must be accessible for people with disabilities, including nonvisual accessibility for the blind and visually impaired, in a manner that provides the same opportunity for access and participation as is provided for other voters. To satisfy this requirement, each polling place must have at least one direct recording electronic or other voting system equipped for people with disabilities. HAVA established the EAC as an agency with wide-ranging duties to help improve state and local administration of federal elections. Among other things, the EAC is responsible for (1) providing voluntary guidance to states implementing certain HAVA provisions; (2) serving as a national clearinghouse of election-related information and a resource for information with respect to the administration of federal elections; (3) providing for the certification of voting systems; and (4) periodically conducting and making publicly available studies regarding methods of ensuring accessibility of voting, polling places, and voting equipment to all voters, including people with disabilities. The EAC also makes grants for the research and development of new voting equipment and technologies and the improvement of voting systems. 
Furthermore, HAVA requires the Secretary of HHS to make yearly payments to each eligible state and unit of local government to be used for (1) making polling places accessible for people with disabilities and (2) providing people with disabilities with information on accessible polling places. HAVA vests enforcement authority with the U.S. Attorney General to bring a civil action against any state or jurisdiction as may be necessary to carry out specified uniform and nondiscriminatory election technology and administration requirements under HAVA. These requirements pertain to HAVA voting system standards, provisional voting and voting information, the computerized statewide voter registration list, and voter registration by mail. The Voting Section, within Justice’s Civil Rights Division, is responsible for enforcement of civil provisions of federal voting laws, such as HAVA. The Voting Section’s internal process for initiating HAVA-related matters and cases consists of four phases: initiation, investigation, complaint justification, and litigation. See appendix III for an overview of this internal process. The Disability Rights Section, also within the Civil Rights Division, is primarily responsible for protecting the rights of persons with disabilities under the ADA, which includes ensuring that people with disabilities have access to basic services, such as voting. Providing an accessible voting system encompasses both the voting method and the operation of the system. In terms of the voting method, HAVA specifically identifies direct recording electronic systems to facilitate voting for people with disabilities or other voting systems equipped for people with disabilities. For the most part, these systems are electronic machines or devices equipped with features to assist voters with disabilities. A brief description of these types of systems follows. Direct Recording Electronic (DRE) Devices. DRE devices capture votes electronically (see fig. 1). These devices come in two basic models: push button or touch screen. DRE ballots are marked by a voter pressing a button or touching a screen that highlights the selected candidate’s name or an issue. Voters can change their selections until they select the final “vote” button or screen, which casts their vote. These devices can be equipped with such features as an audio ballot and audio voting instructions for the blind. Ballot Marking Devices. These devices use electronic technology to mark an optical scan ballot at voter direction, interpret the ballot selections, communicate the interpretation for voter verification, and then print a voter-verified ballot. A ballot marking device integrates components such as an optical scanner, printer, touch-screen monitor, and a navigational keypad (see fig. 2). Voters use the device’s accessible interface to record their choices on a paper or digital ballot. For example, voters with visual impairments will use an audio interface as well as a Braille keypad to make a selection. Voters who prefer to vote in an alternate language can also utilize the audio interface. Voters with disabilities can make their selection using a foot-pedal or a sip-and-puff device. Vote-by-Phone. Vote-by-phone systems use electronic technology to mark paper ballots. This system is made up of a standard touch-tone telephone and a printer (see fig. 3). When voters call from a polling place to connect to the system, the ballot is read to the voters who then make choices using the telephone keypad. 
The system then prints out a paper ballot at either a central location (central print) or a polling site (fax print). Central print ballots are read back to the voter over the telephone for verification, after which the voter can decide to cast the ballot or discard it and revote. Fax print ballots produce a physical ballot at the polling place for the voter to review, verify, and cast in a ballot box. Regarding accessible voting system operation, HAVA specifies that the voting system must be accessible for people with disabilities, in a manner that provides the same opportunity for access and participation as is provided for other voters. The operation of the voting system is the responsibility of local election officials at individual polling places. For the voting system to be accessible, the system should be turned on, equipped with special features such as earphones, set up to accommodate voters using wheelchairs, and positioned in a way that provides the same level of privacy as is afforded to other voters. Also, poll workers should be knowledgeable about the operation of the voting system to provide assistance, if needed.
Alternative Voting Methods
As we have previously mentioned, the VAEHA requires that any elderly voter or voter with a disability who is assigned to an inaccessible polling place, upon his or her advance request, must be assigned to an accessible polling place or be provided with an alternative means for casting a ballot on the day of the election. However, states generally regulate absentee voting and other alternative voting method provisions, which provide voters with disabilities with additional voting options. Alternative voting methods may include curbside voting; taking a ballot to a voter's residence; allowing voters to use another, more accessible polling location either on or before Election Day; voting in person at early voting sites; or removing prerequisites by establishing "no excuse" absentee voting or allowing absentee voting on a permanent basis. Compared to 2000, the proportion of polling places without potential impediments increased, and almost all polling places had an accessible voting system. In 2008, based upon our survey of polling places, we estimate that 27.3 percent of polling places had no potential impediments in the path from the parking area to the voting area—up from 16 percent in 2000; 45.3 percent had potential impediments but offered curbside voting; and the remaining 27.4 percent had potential impediments and did not offer curbside voting. All but one polling place we visited had an accessible voting system to facilitate private and independent voting for people with disabilities. However, 46 percent of polling places had an accessible voting system that could pose a challenge to certain voters with disabilities, such as voting stations that were not arranged to accommodate voters using wheelchairs. In 2008, we estimate that 27 percent of polling places had no potential impediments in the path from the parking area to the voting area—up from 16 percent in 2000 (see fig. 4). Potential impediments included a lack of accessible parking and obstacles en route from the parking area to the voting area. Figure 5 shows some key polling place features that we examined, and appendix IV contains a complete list of potential impediments. These features primarily affect individuals with mobility impairments, in particular voters using wheelchairs.
Many of the polling places that had potential impediments offered curbside voting or other accommodations to assist voters who may have had difficulty getting to or making their way through a polling place. For all polling places, we found that 45.3 percent had one or more potential impediments and offered curbside voting, 27.4 percent had potential impediments and did not offer curbside voting, and 27.3 percent had no potential impediments. Some polling places provided assistance to voters by bringing a paper ballot or provisional ballot to a voter in a vehicle. In addition to curbside voting, officials we interviewed at most polling places said they would provide assistance to help people with disabilities vote in the polling place. For example, some polling places had wheelchairs available, if needed. Similar to our findings in 2000, the majority of potential impediments at polling places in 2008 occurred outside of or at the building entrance, although improvements were made in some areas. Fifty percent of polling places had one or more potential impediments in the path from the parking area to the building entrance (see fig. 6). At the same time, the percentage of polling places with potential impediments at the building entrance dropped sharply—from 59 percent in 2000 to 25 percent in 2008. As shown in table 1, the most common potential impediments in 2008 were steep ramps or curb cuts in the parking area, unpaved or poor surfaces in the path from the parking lot or route to the building entrance, and door thresholds exceeding ½ inch in height. Figure 7 shows an example of a polling place with two potential impediments from the parking area to the building entrance. It is important to note that our assessment of polling places in 2000 did not include measurements of ramps or curb cuts in the parking area. With this additional accessibility indicator, we did not see a reduction of potential impediments in the parking area overall. However, polling places made significant gains in providing designated parking for people with disabilities: the share of polling places with no designated parking decreased from 32 percent in 2000 to only 3 percent in 2008. In comparison to our findings in 2000, the proportion of polling places with multiple potential impediments decreased in 2008. Specifically, polling places with four or more potential impediments decreased significantly—from 29 percent in 2000 to 16 percent in 2008 (see fig. 8). At the same time, the percentage of polling places with one, two, or three potential impediments stayed about the same as in 2000. All but one polling place we examined had at least one accessible voting system—typically, an accessible machine in a voting station—to facilitate private and independent voting for people with disabilities. Accessible voting machines had special features for people with disabilities, such as an audio function to allow voters to listen to ballot choices. According to an election official we interviewed, the accessible voting systems have been significant in helping some voters with disabilities—such as blind voters—vote independently for the first time. The most common type of accessible voting machine was the Automark, followed by the Premier Accuvote, iVotronic, and Sequoia, respectively (see fig. 9).
To help facilitate the use of accessible machines, polling place officials told us that they received training and would provide assistance to help voters with disabilities operate voting machines or overcome difficulties while voting. Almost all (98 percent) of the 626 polling place officials we interviewed said that some or all of the poll workers working on Election Day received training on how to operate the accessible machine. In addition, polling place officials told us they would provide assistance to help people with disabilities with the voting process. All polling place officials we interviewed said they would explain how to operate the machine, and 79 percent said they would demonstrate how to operate the machine (see table 2). Virtually all polling place officials we interviewed told us they would allow a friend or relative to assist a person with a disability with voting. Although polling places had accessible voting systems, nearly one-half (46 percent) had systems that could pose challenges for people with disabilities to cast a private or independent vote. We assessed four aspects of the accessible voting system that, if not met, could pose a challenge to private or independent voting: (1) the voting system is set up and powered on; (2) earphones are available for audio functions; (3) the voting system is set up to accommodate people using wheelchairs; and (4) the accessible voting system provides the same level of privacy for voters with disabilities as is offered to other voters. Figure 10 shows an accessible voting station for people with disabilities. Overall, 35 percent of polling places did not meet one of these four aspects, 10 percent did not meet two aspects, and 1 percent did not meet three aspects. The 95-percent confidence interval for polling places with one challenge is 27.6 to 41.8 percent. The 95-percent confidence interval for polling places with two challenges is 5.9 to 15.7 percent. The 95-percent confidence interval for polling places with three challenges is 0.2 to 2.1 percent. As shown in table 3, the feature most commonly not met—at 29 percent of polling places—was an accessible voting machine located in a voting station with the minimum height, width, or depth dimensions to accommodate a voter using a wheelchair. This was followed by 23 percent of polling places that offered people with disabilities less privacy for voting than is provided for other voters. For example, some voting stations were not positioned to prevent other voters from seeing how voters using the accessible machine were marking their ballots. The majority of states have established accessibility requirements and funded improvements to help facilitate accessible voting, and all states reported that they required local jurisdictions to offer alternative voting methods. Forty-three states reported on our survey that they required accessibility standards for polling places in 2008, up from 23 states in 2000. Additionally, most states reported that they used federal HAVA funds to improve the physical accessibility of polling places. Further, all states reported that they required local jurisdictions to offer alternative voting methods, such as absentee voting. To help facilitate voting for people with disabilities, most states have established standards by which to evaluate the accessibility of polling places and have required inspections of polling places to help ensure accessibility.
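The confidence intervals quoted above come from GAO's polling place sample. As a rough illustration of how such an interval around an estimated percentage can be computed, the sketch below uses a simple normal approximation; GAO's published intervals reflect its actual sample design (stratification and weighting), so this formula will not reproduce them exactly, and the sample size shown is a placeholder rather than the study's.

```python
# Illustrative 95-percent confidence interval for an estimated proportion (simple
# normal approximation; the sample size is assumed, not taken from the study).
import math

def ci_95(p_hat, n):
    """Return (low, high) bounds, in percent, for a proportion estimated from n observations."""
    standard_error = math.sqrt(p_hat * (1 - p_hat) / n)
    margin = 1.96 * standard_error
    return 100 * (p_hat - margin), 100 * (p_hat + margin)

low, high = ci_95(p_hat=0.35, n=200)  # e.g., 35 percent of sampled polling places
print(f"{low:.1f} to {high:.1f} percent")  # roughly 28.4 to 41.6 percent under these assumptions
```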
The number of states with requirements specifying polling place accessibility standards grew from 23 states in 2000 to 43 states in 2008 (see fig. 11). These standards can vary in terms of specificity of requirements and which aspects of accessibility they address. For example, California established requirements for ramps and entrances, among other things. By comparison, Indiana required that the voting area have adequate maneuvering space for voters who use wheelchairs or other mobility aids and allow space for a person who uses a wheelchair to navigate behind and around the accessible machine. Figure 12 is an example of state guidance for setting up the voting room and for placement of the accessible voting system. The number of states that required accommodation of wheelchairs in the voting area has more than doubled—increasing from 17 in 2000 to 38 states in 2008. In addition to specifying standards, since 2000, more states have required polling places to be inspected and local jurisdictions to submit inspection reports to the state to help ensure the accessibility of polling places. Like the accessibility standards, these practices can also vary from state to state. For example, according to its Election Procedures Manual, Arizona requires counties to inspect polling places before each election or to have provisions that counties be contacted if a polling place is altered prior to an election. In contrast, Wisconsin recently revised its accessibility survey and requires all local jurisdictions to conduct their inspections on a primary Election Day so that state and local officials can evaluate the accessibility of polling places during an election. Most states reported using HAVA funds or a combination of HAVA and state funds to support a variety of activities designed to facilitate voting for people with disabilities. In our report on the 2000 election, we found limited funding was one of the main barriers that most state officials faced in improving voting accessibility, especially in providing accessible voting systems and, in some cases, making temporary or permanent modifications to polling places to make them accessible. However, with the availability of HAVA funding since that time, most state officials reported on our survey that they used HAVA funds or a combination of HAVA and state funds to help improve accessibility in these areas. The majority of states (45) reported spending or obligating HAVA funds and, in some cases, also using state funds to enhance physical access to polling places. For example, election officials in Nebraska reported spending HAVA funds to evaluate the accessibility of polling places throughout the state and to ensure they were compliant with ADA standards. Furthermore, 39 states reported obligating or spending HAVA funds or a combination of HAVA and state funds to improve voting systems and technology. For example, Minnesota used HAVA funds to buy ballot-marking machines so that voters with disabilities could mark regular paper ballots privately and independently and to develop instructional videos on how to use the machines. Even though states have taken actions to make the voting process more accessible, many states reported that it was very or moderately challenging to implement certain aspects of HAVA's voting access requirements. According to our state survey, 31 states reported that ensuring polling place accessibility was very or moderately challenging. (See table 4.)
For example, officials in one area of California reported that it was challenging to find enough accessible polling places in some rural communities because of the limited availability of accessible buildings. Additionally, 24 states reported that it was very or moderately challenging to purchase DREs or other accessible voting systems. For example, several states said that it was difficult to buy accessible systems because of the EAC's delay in certifying voting systems. In addition to efforts to ensure polling place accessibility, most states offered alternative voting methods, such as absentee voting, that could help facilitate voting options for people with disabilities. All states offered absentee voting as an option, although 26 states reported on our survey that they required voters to provide at least one of several reasons—typically referred to as an "excuse"—to be eligible to vote via absentee ballot, such as having a disability, being elderly, or being absent from the jurisdiction (see table 5). However, the number of states that allow absentee voting without requiring that voters provide a reason has increased slightly since the 2000 election, from 18 states to 24 states in 2008. Of the 43 states that reported requiring local jurisdictions to offer in-person absentee voting, 40 states required that locations used for in-person absentee voting abide by the same accessibility provisions and accommodations as Election Day polling places. In addition to absentee voting, all 23 states that reported that they required or allowed local jurisdictions to offer early voting also required early voting locations to meet the same HAVA and state accessibility requirements as Election Day polling places. Some states required polling places to provide other accommodations for voters with disabilities, such as curbside voting and audio or visual aids, although fewer states required some of these accommodations in 2008 than in 2000. According to our state survey, the number of states that required curbside voting decreased from 28 states in 2000 to 23 states in 2008 (see fig. 13). Likewise, the number of states that required staff in local jurisdictions to take a ballot to the residence of a voter with a disability who needed assistance on or before Election Day decreased from 21 states in 2000 to only 9 states in 2008. These practices may have declined because more states have taken actions to make polling places accessible since the 2000 election, and more states reported allowing people to vote absentee without having to meet specific criteria. See appendix V for a comparison of state requirements, accommodations, and voting alternatives from our 2000, 2004, and 2008 surveys. Justice provided guidance on polling place accessibility and conducted an initial assessment of states' compliance with HAVA's January 2006 deadline for accessible voting systems. Since then, Justice's oversight of HAVA's access requirements has been part of two other enforcement efforts, but gaps remain. Justice currently conducts polling place observations for federal elections that identify whether an accessible voting system is in place, but it does not systematically assess the physical accessibility of polling places or the level of privacy and independence provided to voters with disabilities. Justice also conducts a small number of annual community assessments of ADA compliance of public buildings, which include buildings designated as polling places.
However, these assessments do not provide a national perspective on polling place accessibility or assess any special features of voting areas and accessible voting systems that are set up only on Election Day. From shortly after the passage of HAVA until 2006, Justice officials said they conducted educational outreach on HAVA voting system requirements. Justice provided guidance on the new HAVA voting system requirements, while the EAC, which was authorized by HAVA to develop guidance and serve as a clearinghouse for election information, was being formed. During this time, Justice officials said they made a considerable effort to educate state and local election officials and national organizations representing election officials and people with disabilities on HAVA voting system requirements. For this effort, Justice officials met with state and local election officials across the country and gave presentations on HAVA requirements at National Association of Secretaries of State and National Association of State Election Directors meetings. In addition, Justice provided information about HAVA voting system requirements on its Web site and posted answers to frequently asked questions. Justice also provided informal responses to questions from state election officials on specific aspects of HAVA voting system requirements. In one response, Justice stated that a HAVA-compliant voting system requires both the voting system and polling place to be accessible to people with disabilities. Furthermore, the EAC, in consultation with Justice, developed an advisory opinion stating that a HAVA-compliant voting system should be accessible to people with disabilities (as defined by the ADA), which includes not just the technical features of the voting system, but configuring the system to allow people with disabilities to vote privately and independently. As part of these early efforts, Justice provided guidance to poll workers on how to assess and create a physically accessible polling place. In 2004, Justice published the Americans with Disabilities Act: ADA Checklist for Polling Places, which provided information to voting officials on key accessibility features needed by most voters with disabilities to go from the parking area to the voting area. The checklist also describes how to take measurements of sloped surfaces, door openings, ramps, and other features to help identify potential impediments and suggest possible alternatives and temporary modifications. Justice officials said they have distributed 16,000 copies of the Americans with Disabilities Act: ADA Checklist for Polling Places, primarily to advocacy groups and state and local election officials, and received over 80,000 hits on its Web site since the checklist was released in February 2004. According to our survey, 34 states found the checklist to be moderately to very helpful and several state election officials with whom we spoke said they used it to develop their own state assessments of polling place accessibility. While the checklist provides limited guidance on accessibility features within the voting area, it does not provide information about the configuration of the voting system—such as positioning the voting system in such a way as to allow a person using a wheelchair to vote privately and independently. In 2005, the EAC adopted Voluntary Voting System Guidelines, which include accessibility standards that specify the configuration of the voting station to accommodate people using a wheelchair. 
The main purpose of these guidelines is to develop technical specifications and standards for voting systems for national testing and certification. HAVA does not require adoption of the guidelines at the state level, although states may choose to adopt the guidelines and make them mandatory in their jurisdictions. While these guidelines are used to specify voting system testing standards, EAC officials told us that user-friendly guidance targeted to poll workers on HAVA voting system requirements, polling place accessibility, and voting assistance to people with disabilities is needed. In addition to early guidance, Justice also conducted an initial assessment of states' progress toward meeting the January 2006 deadline for compliance with HAVA voting system requirements. In 2003, Justice sent letters to state election officials summarizing HAVA voting system requirements. Justice followed up with letters in 2005 and 2006, which outlined HAVA voting system requirements and asked states to respond to a series of questions to help gauge whether every polling place in the state had at least one accessible voting machine and whether poll workers were trained in the machine's operation. Although states were not required to submit reports to Justice under HAVA, Justice officials said all states responded to the department's letters. Justice officials reviewed state responses and followed up with state officials, sometimes on a weekly basis, if they were not satisfied with the progress being made. Justice also monitored local media outlets and state election and procurement Web sites and consulted with national disability groups, election organizations, and local advocacy groups to independently verify information provided by states. If Justice determined that sufficient progress toward HAVA voting system compliance was not being made, it initiated investigations and, in two cases, pursued litigation when all other options were exhausted. Justice filed complaints against New York and Maine in 2006, in part because these states had not made sufficient progress in purchasing and implementing HAVA accessible voting systems. Since then, according to Justice, both Maine and New York acquired and implemented HAVA accessible voting systems for the November 2008 federal election. Justice officials told us that their assessment of HAVA voting system requirements was part of an initial effort to ensure that all states had accessible voting systems by the required January 1, 2006, deadline. Once the 2006 deadline passed and all states reported having accessible voting systems, Justice continued only limited oversight of HAVA voting system requirements and polling place accessibility as part of two ongoing enforcement efforts. These limited efforts leave gaps in ensuring voting accessibility for people with disabilities. For example, Justice supervises polling place observations for federal elections on Election Day primarily to assess compliance with the Voting Rights Act of 1965; however, some limited observations on other federal voting statutes, such as HAVA, are also included. Specifically, polling place observers look for accessible voting systems and assess whether poll workers are trained in their operation. In calendar year 2008, 1,060 federal observers and 344 Justice staff members observed 114 elections in over 75 jurisdictions covering 24 states.
For such efforts, Justice officials select polling places where they believe there may be a problem, on the basis of negative news coverage, complaints received, or information provided by election officials. Information from polling place observations can provide evidence for an ongoing investigation or lawsuit. Justice sometimes initiates investigations on the basis of complaints and other information received. In some cases, the information may also be used to initiate a matter if an investigation has not already been opened. Justice officials told us that, as part of their Election Day 2008 observations, they came across some polling places where accessible voting machines were not turned on or poll workers were unable to operate the accessible machine. However, based on our Election Day assessments, the potential impediments and challenges for voters with disabilities to access and cast a ballot on accessible voting systems may be more common than what Justice officials said they found through their observations. Importantly, Justice did not systematically assess the physical accessibility of the polling places or the level of privacy and independence provided to people with disabilities by the accessible voting system, which limits the department’s ability to identify potential accessibility issues facing voters with disabilities. In addition, Justice officials said they annually initiate a small number of community assessments of ADA compliance in public buildings, including buildings designated as polling places, but these assessments include a small portion of polling places nationwide and are generally not conducted on Election Day. According to Justice, these assessments—called Civic Access assessments—can be resource-intensive, which, in part, may limit the number that the department can complete in a given year. Justice initiated three Civic Access assessments in calendar year 2008. Justice selects communities for Civic Access assessments on the basis of a number of characteristics within a community, including size of the disability community, geographic location, complaints received from citizens and advocacy groups, and proximity to a university or tourist attraction—which, according to Justice officials, might attract people with disabilities from outside of the community. In planning for the assessment, Justice requests information from the communities about their polling places, such as their locations, modifications made on election days, and steps taken to make polling places accessible. The on-site reviews assess as many polling places as possible within the scope of the overall review. Justice officials said they prioritize polling places for assessments on the basis of geographic location, proximity to other buildings targeted for assessment in the review, and extent of public use of the facility for any purpose. To conduct on-site reviews—which typically take 1 to 3 weeks to complete—Justice deploys teams of attorneys, architects, and investigators to take measurements of a variety of public buildings. Afterwards, Justice compiles a list of physical barriers and impediments for people with disabilities found during the on-site review. Then Justice generally negotiates and enters into a settlement agreement with the election jurisdiction, which includes recommendations for improvements, a time frame for implementing needed changes, and requirements for reporting and documentation. 
Between 2000 and 2008, Justice entered into 161 Civic Access settlement agreements, of which 69 contained one or more recommendations aimed at polling place provisions. However, given the small number of Civic Access assessments conducted annually, these assessments do not provide a national perspective on polling place accessibility. In addition, since these assessments are not conducted during elections, they do not assess any special features of voting areas and accessible voting systems that are set up only on Election Day. State and local election officials across the country took a considerable step toward improving voting access for people with disabilities by having accessible voting systems at virtually every polling place we visited on Election Day 2008. These voting systems have been significant in enabling some Americans with disabilities to vote privately and independently at their neighborhood polling place for the first time. This also shows that Justice's efforts to assess states' implementation of HAVA voting system requirements achieved the desired outcome of ensuring that polling places had at least one accessible voting system. Despite these significant efforts, voters with disabilities may have had difficulty casting a ballot on these systems because the majority of polling places still had one or more potential impediments that could prevent a voter with a disability from even getting to the accessible voting system. Furthermore, in close to half of polling places, the accessible voting system itself could pose challenges for voters with disabilities to vote privately or independently. If these conditions continue, there may be some voters with disabilities who will experience frustration and dissatisfaction with the voting process on future election days, while others could be discouraged from voting entirely. Ensuring that voters with disabilities can successfully vote privately and independently requires government to think broadly about access: how voters will arrive at the polling place, enter and move through the building, and cast a ballot using an accessible voting system. For example, just taking an accessible voting system out of its case and setting it up on any voting station is not enough if a voter using a wheelchair cannot reach it. Although Justice's Americans with Disabilities Act: ADA Checklist for Polling Places has been widely distributed and is considered helpful by states, it includes only limited information on creating an accessible voting area and does not have guidance on configuring voting systems for people with disabilities. In addition, Justice's current oversight of HAVA voting system requirements and polling place accessibility does not address all aspects of voting access. Without monitoring that focuses on the broad spectrum of voting accessibility for people with disabilities, it will be difficult for Justice to ensure it is meeting its oversight duties under HAVA and other federal voting statutes and to know whether voters with disabilities are being well served. We acknowledge that extensive monitoring of polling place accessibility could be a costly and challenging undertaking. However, Justice already demonstrated its ability to leverage resources when it worked with states, disability advocacy organizations, and others to conduct its initial assessment of states' implementation of HAVA voting system requirements.
As the proportion of older Americans increases, the number of people with disabilities will also likely continue to grow, and it will become even more important to ensure that voting systems are accessible to all eligible voters. To identify and reduce the number of potential impediments and other challenges at polling places that might hinder or detract from the voting experience for people with disabilities, we recommend that the Department of Justice look for opportunities to expand its monitoring and oversight of the accessibility of polling places for people with disabilities in a cost-effective manner. This effort might include the following activities: working with states to use existing state oversight mechanisms and using other resources, such as organizations representing election officials and disability advocacy organizations, to help assess and monitor states’ progress in ensuring polling place accessibility, similar to the effort used to determine state compliance with HAVA voting system requirements by the 2006 deadline; expanding the scope of Election Day observations to include an assessment of the physical access to the voting area and the level of privacy and independence being offered to voters with disabilities by accessible voting systems; and expanding the Americans with Disabilities Act: ADA Checklist for Polling Places to include additional information on the accessibility of the voting area and guidance on the configuration of the accessible voting system to provide voters with disabilities with the same level of privacy and independence as is afforded to other voters. We provided a draft of this report to Justice, EAC, and HHS for review and comment. Justice generally agreed with our recommendation to expand its monitoring and oversight of accessibility of polling places for people with disabilities in a cost-effective manner, although it had some concerns about specific activities we suggested as part of this recommendation. Specifically, Justice generally agreed with our suggestion to work with states to use existing state oversight mechanisms and other resources to help assess and monitor states’ progress in ensuring polling place accessibility, similar to the effort it undertook shortly after HAVA was enacted. Justice said that it can look for opportunities to enhance educational efforts to states and gather some additional information to assess state accessibility programs, and work with election officials and disability rights organizations to stress the importance of polling place accessibility and ask for their assistance in improving compliance with federal requirements related to accessibility, but said that it is unlikely to have the resources for a comprehensive undertaking similar to its earlier effort. Justice also generally agreed with our recommendation to expand the scope of the Americans with Disabilities Act: ADA Checklist for Polling Places to provide additional information on ensuring the accessibility of the voting area and include guidance on the configuration of the accessible voting system. Justice expressed concerns about our suggestion to expand the scope of Election Day observations to include an assessment of the physical access to the voting area and the level of privacy and independence being offered to voters with disabilities by accessible voting systems. 
In particular, it had concerns about shifting the focus of the federal observer program from its primary purpose of ensuring compliance with the Voting Rights Act of 1965, and not having the resources to train and deploy observers to conduct extensive assessments of polling places on Election Day. At the same time, Justice said that it will continue to have Election Day observers and monitors note whether polling places have an accessible voting system and will consider incorporating some additional questions, such as observing whether the accessible voting system appears to be situated in a way that voters can use the system privately and independently. In response, we believe that the actions we suggest to expand Justice’s monitoring and oversight activities are consistent with the agency’s stated function. As laws are enacted and revised to support voting accessibility, Justice can be positioned to fully meet its duties by modifying its assessment approaches. That said, we believe that incorporating additional questions such as these would satisfy our recommendation and could be done without adding significant work or interfering with the primary purpose of the Election Day observer program. Justice also provided technical comments, which we incorporated as appropriate. The EAC expressed appreciation for our research and said that the report will be a valuable resource for the EAC and election officials as they continue to develop, implement, and evaluate effective election administration practices regarding voting accessibility. It also identified some of the resources that the EAC has made available to election officials and the public regarding voting accessibility, and stated that it will continue to work in collaboration with election officials, experts, and advocacy groups to identify additional resources needed to address this area. HHS said that our findings were consistent with what states have reported and that the report highlights concerns that HHS has found for some of its grantees. Written comments from Justice, EAC, and HHS appear in appendixes VI, VII, and VIII. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to Justice, EAC, HHS, the U.S. Access Board, and other interested parties. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Barbara D. Bovbjerg at (202) 512-7215 or bovbjergb@gao.gov, or William O. Jenkins at (202) 512-8777 or jenkinswo@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. Our objectives were to examine (1) the proportion of polling places that have features that might facilitate or impede access to voting for people with disabilities and how these results compare to our findings from the 2000 federal election; (2) the actions states are taking to facilitate voting for people with disabilities; and (3) the steps the Department of Justice (Justice) has taken to enforce the Help America Vote Act of 2002 (HAVA) voting access provisions. 
To determine the proportion of polling places that have features that might facilitate or impede access to voting for people with disabilities and how these results compared to our 2000 findings, GAO staff visited polling places on Election Day, November 4, 2008, to make observations, take measurements, and conduct short interviews of polling place officials. To obtain information on our first and third objectives, we administered a Web-based survey of election officials in all 50 states, the District of Columbia, and 4 U.S. territories (American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands). For all of our objectives, we interviewed officials at Justice, the Election Assistance Commission (EAC), and the Department of Health and Human Services (HHS), and officials from national organizations that represented election officials and disability advocacy organizations. We also reviewed federal laws, guidance, and other documentation. We conducted our work from April 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. On Election Day, November 4, 2008, we sent out teams of two GAO staff to each county in our sample. Each team was equipped with data collection instruments (DCI) on which to record their observations and the necessary measurement tools: the ADA Accessibility Stick II™, a fish scale, and a tape measure. We monitored the activities of the teams throughout Election Day and provided assistance by telephone from our Washington, D.C., office. To ensure uniform data collection across the country, we trained all teams in how to properly fill out each question on the DCI, use the necessary measurement tools, and interview the chief poll worker in each polling place about the accessible voting systems as well as accommodations for voters with disabilities. See figure 14 for examples of measurements and items for observation that were used to train GAO teams for Election Day visits. We also instructed teams on the appropriate times for visiting polling places and not to approach voters or interfere with the voting process in any way during their visits. Each GAO team that visited a county on Election Day received a list of up to 8 polling places to visit. The first polling place on their list was randomly determined. We then used geocoding software and the addresses of the polling places to determine the latitude and longitude coordinates for all of the polling places each team was scheduled to visit. These coordinates were used to order the polling places after the first one so as to minimize the net travel distance between stops on Election Day. 
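The report describes ordering each team’s remaining polling places from geocoded coordinates so as to minimize net travel distance, but it does not specify the routing method used. The sketch below is a minimal illustration of one plausible approach, a greedy nearest-neighbor ordering over hypothetical latitude and longitude pairs; the place names, coordinates, and function names are our own assumptions, not GAO’s.

    import math

    def distance_miles(a, b):
        # Approximate great-circle (haversine) distance in miles between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 3958.8 * 2 * math.asin(math.sqrt(h))

    def order_stops(first_stop, other_stops):
        # Greedy nearest-neighbor ordering after a randomly chosen first stop.
        route, remaining, current = [first_stop], list(other_stops), first_stop
        while remaining:
            nearest = min(remaining, key=lambda s: distance_miles(current[1], s[1]))
            remaining.remove(nearest)
            route.append(nearest)
            current = nearest
        return route

    # Hypothetical geocoded polling places: (name, (latitude, longitude)).
    first = ("Precinct A", (38.90, -77.03))
    others = [("Precinct B", (38.91, -77.04)),
              ("Precinct C", (38.88, -77.01)),
              ("Precinct D", (38.95, -77.07))]
    print([name for name, _ in order_stops(first, others)])

A greedy ordering of this kind does not guarantee the shortest overall route, but for lists of up to 8 stops it keeps travel distance low with very little computation.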
To maintain the integrity of the data collection process, GAO teams were instructed not to disclose the location of the selected polling places before their visits. In some cases, states or counties placed restrictions on our visits to polling places. For example, laws in some states prohibit nonelection officials from entering the voting room or voting area. Election officials in several counties granted us access on the condition that we not interview polling place officials on Election Day, and, in several polling places, officials were too busy assisting voters to be interviewed. In these cases, we e-mailed and called chief polling place officials after Election Day to complete the interview. Polling place officials contacted after Election Day were asked the same questions as the officials interviewed on Election Day. Due to the constraints of time and geography, some teams were not able to visit all 8 polling places, but overall, GAO teams were able to visit 98 percent of the randomly selected polling places, or 730 of 746 polling places in 79 counties across 31 states. GAO teams used a DCI that was similar to the one used in our 2000 study of polling places to record observations and measurements taken inside and outside of the polling place and to capture responses from our interviews with chief polling place officials. However, we updated the DCI on the basis of changes that have occurred in federal laws and guidance since 2000. The primary sources we used to determine the most current requirements and standards for evaluating polling place accessibility were the voting system requirements specified in HAVA and polling place accessibility guidance in the Americans with Disabilities Act: ADA Checklist for Polling Places, issued by the Department of Justice in 2004. In addition, disability advocates and representatives of the U.S. Access Board reviewed a draft version of our DCI, and we incorporated their comments as appropriate. We also received input from officials at Justice and the EAC and from national organizations that represented election officials. Finally, to ensure that GAO teams could fill out the instrument in the field and complete it in a reasonable amount of time, we pretested the DCI during the presidential primary election in South Dakota in June 2008 and during the congressional primary election in Wisconsin in September 2008. In analyzing the data collected on Election Day, we first examined features that might facilitate or impede access on the path to the voting area. In doing so, we looked at features at four different locations at the polling place: the parking area, the path from the parking area to the building entrance, the building entrance, and the path from the building entrance to the voting area. These features included the following: the slope of ramps or cut curbs along the path is no steeper than 1:12; the surface is paved or has no abrupt changes over ½ inch; the doorway threshold does not exceed ½ inch in height; and single- or double-door openings are 32 inches or more wide. Therefore, the percentage of polling places cited as having one or more potential impediments was based on whether a polling place was found to have at least one feature that might impede access to voting in any of the four locations we examined and does not include potential impediments associated with the voting area itself. While features of the voting area were not included in our summary measure of whether a polling place had a potential impediment, we did look for features that might facilitate or impede private and independent voting inside the voting area. 
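To make the path-of-travel criteria above concrete, the sketch below encodes them as a simple checklist over one location’s measurements. The field names, units, and flag wording are illustrative assumptions for this sketch and are not drawn from GAO’s actual data collection instrument.

    def potential_impediments(obs):
        # Return the features that might impede access, given one location's measurements in inches.
        flags = []
        rise, run = obs.get("ramp_rise_in"), obs.get("ramp_run_in")
        if rise and run and run / rise < 12:
            # "No steeper than 1:12" means at least 12 inches of run for every inch of rise.
            flags.append("ramp or curb cut steeper than 1:12")
        if obs.get("surface_change_in", 0) > 0.5:
            flags.append("abrupt surface change over 1/2 inch")
        if obs.get("threshold_in", 0) > 0.5:
            flags.append("doorway threshold over 1/2 inch")
        if obs.get("door_opening_in", 99) < 32:
            flags.append("door opening narrower than 32 inches")
        return flags

    # Example: an entrance with a 1-inch threshold and a 30-inch door opening would be
    # counted as having two features that might impede access.
    print(potential_impediments({"threshold_in": 1.0, "door_opening_in": 30}))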
We identified the types of voting methods available to voters with and without disabilities and took measurements of the voting station or table used by people with disabilities to determine whether wheelchairs could fit inside the station or under the table and whether equipment was within reach for wheelchair users. We collected information on the accessible voting systems required under HAVA to determine the extent to which the system had features that might facilitate voting for people with disabilities and allow them to vote privately and independently. We also briefly interviewed chief poll workers at most of the polling places we visited to find out whether curbside voting was available and how the poll workers would handle voter requests for assistance from a friend, relative, or election official. All sample surveys are subject to sampling error, which is the extent to which the survey results differ from what would have been obtained if the whole universe of polling places had been observed. Measures of sampling error are defined by two elements—the width of the confidence interval around the estimate (sometimes called precision of the estimate) and the confidence level at which the interval is computed. The confidence interval refers to the range of possible values for a given estimate, not just a single point. This interval is often expressed as a point estimate, plus or minus some value (the precision level). For example, a point estimate of 75 percent plus or minus 5 percentage points means that the true population value is estimated to lie between 70 percent and 80 percent, at some specified level of confidence. The confidence level of the estimate is a measure of the certainty that the true value lies within the range of the confidence interval. We calculated the sampling error for each statistical estimate in this report at the 95-percent confidence level and present this information throughout the report. To learn more about states’ actions to facilitate voting access and perspectives on Justice’s oversight of HAVA voting access provisions, we administered a Web-based survey of officials responsible for overseeing elections from the 50 states, the District of Columbia, and 4 U.S. territories (American Samoa, Guam, Puerto Rico, and the U.S. Virgin Islands). Survey topics included (1) state requirements and policies for early voting, absentee voting, and voter identification; (2) state voting accommodations for people with disabilities; (3) state funding and experiences implementing HAVA voting access requirements; (4) level of interaction with Justice officials and usefulness of Justice guidance; and (5) state and local actions to facilitate voting in long-term care facilities. The survey was conducted using a self-administered electronic questionnaire posted on the Web. We collected the survey data between December 2008 and February 2009. We received completed surveys from all 50 states, 4 territories, and the District of Columbia, for a 100-percent response rate. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. 
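As a worked illustration of the sampling-error example above, the sketch below computes a large-sample 95-percent confidence interval for an estimated proportion. It assumes simple random sampling; GAO’s actual estimates reflect its stratified sample design, so this is only a textbook approximation, and the sample size shown is chosen simply to reproduce the plus-or-minus 5 percentage point example from the text.

    import math

    def proportion_ci(p_hat, n, z=1.96):
        # Large-sample 95-percent confidence interval for a proportion under simple random sampling.
        margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - margin, p_hat + margin

    # A 75 percent point estimate from roughly 288 observations yields about +/- 5 percentage points.
    low, high = proportion_ci(0.75, 288)
    print(f"{low:.2f} to {high:.2f}")  # approximately 0.70 to 0.80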
To minimize nonsampling errors, we pretested draft survey instruments with state election officials in Kansas, Virginia, and Wisconsin to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the information we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the questionnaire on the basis of pretest results. Because respondents entered their responses directly into our database of responses from the Web-based surveys, the possibility of data entry errors was greatly reduced. We also performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, a second independent analyst verified that the computer programs used to analyze the data were written correctly. We also searched state election Web sites to illustrate their respective approaches, and obtained and reviewed relevant documentation for selected states. The scope of this work did not include contacting election officials from each state and local jurisdiction to verify survey responses or other information provided by state officials. In addition, we did not independently analyze states’ requirements, but instead relied on the states’ responses to our survey. To specifically determine what actions Justice has taken to enforce HAVA voting access provisions, we interviewed Justice officials and reviewed relevant federal laws, guidance, and other documentation. Specifically, we spoke with Justice officials in the Voting and Disability Rights Sections of the Civil Rights Division to document Justice’s internal process for handling HAVA matters and cases and to review the department’s actions to monitor and enforce HAVA voting access provisions (see app. IV for an overview of this process). We reviewed the Americans with Disabilities Act: ADA Checklist for Polling Places and informal guidance, such as letters responding to state election officials’ requests for additional guidance on HAVA voting access requirements. We also reviewed citizen complaints from Election Day 2008 that were provided to us by Justice and all three complaints containing a HAVA voting access claim that Justice has filed against states or election jurisdictions since HAVA was enacted in 2002. In addition, to learn more about the federal role in providing assistance and funding to states under HAVA, we interviewed officials from the EAC, HHS, the National Association of Secretaries of State, and the National Association of State Election Directors. Within Justice, the Voting Section’s internal process for initiating HAVA-related matters and handling cases consists of four phases: initiation, investigation, complaint justification, and litigation. While the Voting Section generally does not receive referrals from other federal agencies, many matters are initiated by allegations from a variety of sources, including citizens, advocacy and community organizations, Members of Congress, U.S. Attorney’s Offices, and news articles or through election monitoring. The Voting Section also sometimes initiates matters to monitor private lawsuits and to observe elections. The matter is assigned to an attorney under the supervision of a deputy chief or special litigation counsel for review to determine if further action is warranted. 
If so, a memorandum is prepared for the section chief, and final approval from the Assistant Attorney General or his or her designee is required before an investigation can begin. Once the decision is made to investigate a matter, the section chief will assign a trial attorney, who conducts an investigation. When the investigation is complete, the trial attorney makes a recommendation to the section chief on whether Justice should file a lawsuit, close the matter, or participate in some other manner. The section chief is responsible for making the final decision about closing an investigation authorized by the Assistant Attorney General or recommending a lawsuit or other participation to the Assistant Attorney General. If a referral or allegation of a HAVA violation is not pursued, all appropriate parties are notified, and the matter is closed. If a decision is made to pursue a matter and recommend filing a formal complaint to initiate a lawsuit, then the trial attorney prepares a justification package. An attorney manager and the section chief are responsible for reviewing and approving the justification package. A Deputy Assistant Attorney General reviews the justification package, which is then forwarded to the Assistant Attorney General for final review and approval. The justification package is also sent to the U.S. Attorney’s office for the district where the lawsuit is to be filed for review and concurrence. If the justification package is not approved, the trial attorney generally prepares a closing memorandum and notifies the charging party, respondent, and/or referring agency, as appropriate, that Justice is not filing a lawsuit. The matter is then closed. If the justification package is approved, the Civil Rights Division notifies the defendant by letter of Justice’s intent to file a lawsuit. After the defendant has been notified, the trial attorney and the defendant often have presuit settlement discussions. If a presuit settlement is reached, a settlement document stating the points of agreement is prepared, reviewed, and approved by the Office of the Assistant Attorney General and signed by all parties. If the presuit settlement discussions do not result in a settlement, the complaint is filed in federal district court and the parties engage in litigation. Filing a complaint and beginning legal proceedings do not preclude the trial attorney and defendant from continuing negotiations and reaching a settlement. According to Voting Section officials, defendants often settle prior to, or during, a trial. If a trial is held, the plaintiff or defendant can appeal the decision. If the decision is appealed, the Voting Section works closely with the Appellate Section of the Civil Rights Division, which assumes responsibility for the appeal stage of the case. 
Location of features that might impede access to voting in a polling place (the report presents estimates with lower bound (LB) and upper bound (UB) columns for the 95-percent confidence intervals; the estimates themselves are not reproduced here):

Parking area: no designated parking for people with disabilities; one or more unramped or uncut curbs <36 inches wide; other potential impediments in the parking lot.

Path from parking area to building entrance: unpaved or poor surface in the parking lot or on the route to the building entrance; ramp in the path is steeper than 1:12; no sidewalk or path from the parking area to the building entrance; ramps in the path do not have a level landing, at least 60 inches long, at the top and bottom of each section; leaves, snow, or litter in the path; sidewalk or path <36 inches wide; ramp in the path <36 inches wide; steps required in the path; other potential impediments in the path from the parking area to the building entrance.

Building entrance: doorway threshold exceeds ½ inch in height; single doorway opening is <32 inches wide; doors that would be difficult for a person using a wheelchair to open; double door opening is <32 inches wide, including situations in which one of the doors cannot be opened; other potential impediments at the building entrance.

Path from building entrance to voting area: doorway threshold exceeds ½ inch in height; single doorway opening is <32 inches wide; corridors that do not provide an unimpeded width of at least 36 inches (which can narrow to 32 inches for up to 2 feet).

Notes: We did not measure these items in 2000. We collected data on this item in 2008, following our review based on the Americans with Disabilities Act: ADA Checklist for Polling Places and interviews with experts. We based this measurement on Justice’s ADA Standards for Accessible Design, 28 C.F.R. Part 36, Appendix A, which states that any part of an accessible route with a slope greater than 1:20 shall be considered a ramp and that the maximum slope of a ramp is 1:12, except in certain cases where space limitations prohibit the use of a 1:12 slope or less.

Brett Fallavollita, Assistant Director, and Laura Heald, Analyst-in-Charge, managed this assignment. Carolyn Blocker, Katherine Bowman, Ryan Siegel, and Amber Yancey-Carroll made significant contributions to this report in all aspects of the work. Jason Palmer, Susan Pachikara, Gretta Goodwin, Matthew Goldstein, and numerous staff from headquarters and field offices provided assistance with Election Day data collection. Carl Barden, Cathy Hurley, Stu Kaufman, George Quinn, and Walter Vance provided analytical assistance; Alex Galuten provided legal support; Paula Moore provided technical support; Jessica Orr provided assistance on report preparation; Mimi Nguyen developed the report’s graphics; and Anna Bonelli, Caitlin Croake, Kim Siegal, and Paul Wright verified our findings. Voters with Disabilities: More Polling Places Had No Potential Impediments Than In 2000, But Challenges Remain. GAO-09-685. Washington, D.C.: June 10, 2009. Elections: States, Territories, and the District Are Taking a Range of Important Steps to Manage Their Varied Voting System Environments. GAO-08-874. Washington, D.C.: September 25, 2008. Elections: 2007 Survey of State Voting System Programs. GAO-08-1147SP. Washington, D.C.: September 25, 2008. Elections: Federal Program for Certifying Voting Systems Needs to Be Further Defined, Fully Implemented, and Expanded. GAO-08-814. Washington, D.C.: September 16, 2008. 
Election Assistance Commission—Availability of Funds for Purchase of Replacement Voting Equipment. B-316107. Washington, D.C.: March 19, 2008. Elderly Voters: Some Improvements in Voting Accessibility from 2000 to 2004 Elections, but Gaps in Policy and Implementation Remain. GAO-08-442T. Washington, D.C.: January 31, 2008. Elections: All Levels of Government Are Needed to Address Electronic Voting System Challenges. GAO-07-741T. Washington, D.C.: April 18, 2007. Elections: The Nation’s Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006. Elections: Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed. GAO-05-956. Washington, D.C.: September 21, 2005. Elections: Electronic Voting Offers Opportunities and Presents Challenges. GAO-04-975T. Washington, D.C.: July 20, 2004. Elections: A Framework for Evaluating Reform Proposals. GAO-02-90. Washington, D.C.: October 15, 2001.
Voting is fundamental to our democracy, and federal law generally requires polling places to be accessible to all eligible voters for federal elections, including voters with disabilities. However, during the 2000 federal election, GAO found that only 16 percent of polling places had no potential impediments to access for people with disabilities. To address these and other issues, Congress enacted the Help America Vote Act of 2002 (HAVA), which required each polling place to have an accessible voting system. We examined (1) the proportion of polling places during the 2008 federal election with features that might facilitate or impede access for voters with disabilities compared to our findings from 2000; (2) actions states are taking to facilitate voting access; and (3) steps the Department of Justice (Justice) has taken to enforce HAVA voting access provisions. GAO visited 730 randomly selected polling places across the country, representing polling places nationwide, on Election Day 2008. GAO also surveyed states and interviewed federal officials. Compared to 2000, the proportion of polling places without potential impediments increased and almost all polling places had an accessible voting system. In 2008, based upon our survey of polling places, we estimate that 27.3 percent of polling places had no potential impediments in the path from the parking area to the voting area--up from 16 percent in 2000; 45.3 percent had potential impediments but offered curbside voting; and the remaining 27.4 percent had potential impediments and did not offer curbside voting. All but one polling place we visited had an accessible voting system--typically, an electronic machine in a voting station--to facilitate private and independent voting for people with disabilities. However, 46 percent of polling places had an accessible voting system that could pose a challenge to certain voters with disabilities, such as voting stations that were not arranged to accommodate voters using wheelchairs. Most states have established accessibility requirements and funded improvements to help facilitate accessible voting, and all states reported that they required local jurisdictions to offer alternative voting methods. In 2008, 43 states reported that they required accessibility standards for polling places, up from 23 states in 2000. Additionally, most states reported that they used federal HAVA funds to improve the physical accessibility of polling places. Further, all states reported that they required local jurisdictions to offer alternative voting methods, such as absentee voting. At the same time, 31 states reported that ensuring polling place accessibility was challenging. Justice provided guidance on polling place accessibility and conducted an initial assessment of states' compliance with HAVA's January 2006 deadline for accessible voting systems. Since then, Justice's oversight of HAVA's access requirements has been part of two other enforcement efforts, but gaps remain. While Justice provided guidance on polling place accessibility, this guidance does not address accessibility of the voting area itself. Justice currently conducts polling place observations for federal elections that identify whether an accessible voting system is in place, but it does not systematically assess the physical accessibility of polling places or the level of privacy and independence provided to voters with disabilities. 
Justice also conducts a small number of annual community assessments of public buildings' compliance with the Americans with Disabilities Act, including buildings designated as polling places. However, these assessments do not provide a national perspective on polling place accessibility or assess any special features of the voting area and the accessible voting system that are set up only on Election Day.
The federal government uses grants to achieve national priorities through nonfederal parties, including state and local governments, educational institutions, and nonprofit organizations. While there can be significant variation among different grant programs, most federal grants share a common life cycle for administering the grants: pre-award, award, implementation, and closeout (see fig. 1). During the award stage, the federal awarding agency enters into an agreement with grantees stipulating the terms and conditions for the use of grant funds, including the period of time funds are available for the grantee’s use. Also in the award stage, the awarding agency opens accounts in one of several payment management systems through which grantees receive payments. During the implementation (post-award) stage, the grantee carries out the requirements of the agreement and requests payments, while the awarding agency approves payments and oversees the grantee. Once the grantee has completed all the work associated with a grant agreement or the end date for the grant has arrived, or both, the awarding agency and grantee close out the grant. Closeout procedures ensure that grantees have met all financial requirements, provided their final reports, and returned any unspent balances. Grant closeout procedures, like other stages of the grant cycle, are subject to a wide range of requirements derived from a combination of OMB guidance, agency regulations, agency policy, and program-specific statutes. OMB Circular No. A-110, Uniform Administrative Requirements for Grants and Agreements with Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations, and OMB Circular No. A-102, Grants and Cooperative Agreements with State and Local Governments, provide OMB guidance to federal agencies on grant administration. These circulars apply only to federal awarding agencies; they do not apply directly to grantees. Each federal agency that awards and administers grants and agreements that are subject to the guidance in Circulars A-110 and A-102 is responsible for issuing regulations, with which grantees must comply, that are consistent with the circulars, unless different provisions are required by federal statute or are approved by OMB. Agency regulations issued under the circulars typically impose closeout procedures upon both the awarding agency and the grantee. Generally, within 90 days after the completion of the award, grantees must submit all financial, performance, and other reports as required by the terms and conditions of the award. Also within this 90-day period, grantees generally are to liquidate all obligations incurred under the award. Grantees then are to promptly refund any remaining cash balances to the awarding agency. Awarding agencies must make prompt payments, often defined as within 90 days, to grantees for allowable reimbursable costs under the award being closed out. Also, if allowed by the terms and conditions of the award, the awarding agency must make a settlement for any upward or downward adjustment to the federal share of costs after the closeout reports are received. Some federal agencies’ grant policies, such as HHS’s, further specify that grants are to be closed out within 180 days of the end of the grant funding period. While there can be substantial variation among grant programs, figure 2 illustrates how closing out grants could allow an agency to redirect resources toward other projects and activities or return unspent funds to Treasury. 
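The general 90-day reporting and liquidation rule and the 180-day closeout target described above can be illustrated with a simple date calculation. The sketch below is illustrative only; actual deadlines depend on the award terms and the awarding agency's regulations, and the function and field names are our own.

    from datetime import date, timedelta

    def closeout_milestones(grant_end, report_days=90, closeout_days=180):
        # Illustrative milestones: final reports and liquidation are generally due 90 days after
        # the award ends; some agency policies, such as HHS's, target closeout within 180 days.
        return {
            "final_reports_and_liquidation_due": grant_end + timedelta(days=report_days),
            "agency_closeout_target": grant_end + timedelta(days=closeout_days),
        }

    print(closeout_milestones(date(2011, 9, 30)))
    # {'final_reports_and_liquidation_due': datetime.date(2011, 12, 29),
    #  'agency_closeout_target': datetime.date(2012, 3, 28)}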
Generally, if the undisbursed balances that are deobligated from closed grant accounts are still available for incurring new obligations, the agency may use the funds to enter into new grant agreements. This may allow federal agencies to use existing resources to fund new grant projects. If the undisbursed amounts are returned to expired appropriation accounts, the agency may not use the deobligated funds to make new grants. However, the agency may use the deobligated funds to make adjustments to obligations that were incurred before the appropriations account expired. Expired appropriations accounts remain available for 5 years to make adjustments, after which the undisbursed balances are canceled and returned to the Treasury. In other words, the funds are no longer available for use by the agency. This helps ensure that federal agency resources are not improperly spent and helps agencies maintain accurate accounting of their budgetary resources. It may also reduce future federal outlays relative to the federal government’s original estimated amount of spending for these programs. We found that more than $794 million in undisbursed balances remained in expired PMS accounts, including undisbursed balances that remained in accounts several years past their expiration date. Roughly three-fourths of all undisbursed balances in expired grant accounts were from grants issued by HHS, the largest grant-making agency in the federal government. Although this represents only a small share (2.7 percent) of the total funding that was made available for these grants, department officials told us they are taking action to improve timely closeout. We also found that more than $126 million in undisbursed balances remained in dormant grant accounts—accounts for which there had been no activity for 2 years or more—in ASAP, another large federal payment system. As of September 30, 2011, we found that $794.4 million in undisbursed balances remained in 10,548 expired grant accounts in PMS, the largest federal civilian payment system. These are accounts that were more than 3 months past the grant end date and had no activity for 9 months or more. Undisbursed balances in expired grant accounts were spread across numerous federal agencies and almost 400 different programs. (See app. II for a list of PMS customers.) For comparison, the total amount of undisbursed balances in expired grant accounts in PMS is more than $200 million less than the amount we previously reported for calendar year 2006, while the overall amount of grant disbursements through PMS increased by about 23 percent during this time, from $320 billion in fiscal year 2006 to $415 billion in fiscal year 2011. Overall, total undisbursed balances as of September 30, 2011, represent roughly 3.3 percent of the total amount of funds made available for these grants, down from 7.4 percent at the end of calendar year 2006. However, at the department or agency level, the total amount of undisbursed balances in expired accounts as of September 30, 2011, varied from 2.7 percent to 34.8 percent of the total funding made available for these grant accounts during this period. 
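The closeout report criteria described above (more than 3 months past the grant end date with no activity for 9 months or more) can be expressed as a simple filter. The account fields below are hypothetical, and months are approximated as 30-day periods for this sketch.

    from datetime import date

    def is_closeout_candidate(grant_end, last_activity, as_of,
                              months_past_end=3, months_inactive=9):
        # Flag an account using the criteria described above; months approximated as 30 days.
        days_past_end = (as_of - grant_end).days
        days_inactive = (as_of - last_activity).days
        return days_past_end > months_past_end * 30 and days_inactive >= months_inactive * 30

    # Hypothetical account: grant ended in December 2010, last drawdown in October 2010.
    print(is_closeout_candidate(date(2010, 12, 31), date(2010, 10, 15), date(2011, 9, 30)))  # True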
OMB guidance and agency regulations generally require grantees to submit all financial and performance reports and liquidate all obligations incurred under the award within 3 months (or 90 days) after the completion of the award; awarding agencies must then make prompt payments to grantees for allowable reimbursable costs for the award being closed out. Therefore, based on the information in PMS, these expired grant accounts should be considered for grant closeout. Failure to close out a grant in the payment system and deobligate any unspent balances can allow grantees to continue to draw down federal funds in the payment system even after the grant’s period of availability to the grantee has ended, making these funds more susceptible to waste, fraud, or mismanagement. As figure 3 shows, we found that undisbursed balances remained in grant accounts several years past their expiration date. We found that 991 expired grant accounts were more than 5 years past the grant end date; they contained a total of $110.9 million in undisbursed funding. Of these, 115 expired grant accounts containing roughly $9.5 million remained open more than 10 years past the grant end date. Federal regulations generally require that grantees retain financial records and other documents pertinent to a grant for a period of 3 years from the date of submission of the final report. The risk increases after several years that grantees will not have retained the financial documents and other information for these grants that are needed by federal agencies to properly reconcile financial information and make the necessary adjustments to the grant award amount and the amount of federal funds paid to the recipient, potentially resulting in the payment of unnecessary and unallowable costs. While the amount of funds remaining in individual expired grant accounts ranged from less than $1 to more than $19 million, the small percentage of grant accounts (a little more than 1 percent) with undisbursed balances of $1 million or more accounted for more than a third of the total undisbursed funds in expired grant accounts. Overall, 123 accounts from eight different federal agencies had more than $1 million in undisbursed balances at the end of fiscal year 2011. These expired grant accounts had a combined total of roughly $316 million in undisbursed balances, or 40 percent of the total undisbursed funding in expired grant accounts as of September 30, 2011 (see fig. 4). Accounts with undisbursed balances remaining at the end of the agreed-upon grant end date can indicate a potential grant management problem. Data showing that grantees have not expended large amounts of funding, such as $1 million or more, by the specified grant end date raise concern that grantees have not fully met the program objectives for the intended beneficiaries within the agreed-upon time frames. Roughly three-fourths of all undisbursed balances in expired grant accounts ($594.7 million) in PMS as of September 30, 2011, were from 8,262 HHS-issued grants. HHS is the largest grant-making agency in the federal government in terms of total dollars awarded and disbursed. Overall, the total undisbursed balances in expired HHS grant accounts represented 2.7 percent of the total amount authorized for these accounts, which is the lowest percentage for any federal department with undisbursed balances in expired grant accounts included on the September 30, 2011, PMS closeout report. This indicates that the grantees have typically spent the vast majority of the funds awarded. However, the remaining funds add up to hundreds of millions of dollars that the agency could potentially redirect toward other projects and activities or return to Treasury. Furthermore, 85 of the 123 expired grant accounts with $1 million or more remaining at the end of fiscal year 2011 discussed earlier in this report were HHS-issued grants. 
Of the 10 HHS operating divisions with accounts in PMS, the Administration for Children and Families (ACF) and the Centers for Disease Control and Prevention (CDC) had the largest undisbursed balances at the end of fiscal year 2011, with roughly $321.7 million and $110.1 million, respectively. While HHS policy generally requires that grants be closed out within 180 days after the grant’s end date, we found more than $265 million in undisbursed balances in expired grant accounts that remained open 3 or more years past the grant end date. This includes more than $86 million in expired grant accounts that were 5 years or more past the grant end date, of which more than $7 million remained unspent 10 years after the grant end date (see fig. 5). HHS Grants Policy Directive 4.02 outlines the department’s grants management requirements for closeout. In response to past audit reports, officials from HHS’s Division of Grants said that they have increased monitoring of grant closeout. In February 2011, HHS established an interagency workgroup—the Accelerated Closeout Team—led by the Office of Grants and Acquisition Policy and Accountability to coordinate a departmentwide response in strengthening financial controls and accelerating the number of grant and contract closeouts. The Accelerated Closeout Team for grants reviewed and analyzed PMS data from previous years and used the data to develop a list of eligible grant awards—focusing specifically on those from fiscal year 2008. The team has a near-term goal of closing out all eligible grants with a grant end date of 2008. According to HHS, the department has identified tens of millions of dollars in undisbursed balances in PMS available for deobligation through this initiative. The initiative will conclude later this year, at which point HHS will re-evaluate any additional areas requiring specific attention. HHS officials said that they are drafting a departmentwide grants closeout policy to improve the grant closeout process going forward. HHS officials said that attention on timely grant closeout in PMS increased in response to previous audits. Both the HHS Office of Inspector General and the HHS independent auditor have reported a backlog of expired HHS grant accounts with undisbursed balances in PMS. The HHS Inspector General issued four reports from 2008 to 2009 on grant closeout in PMS at four selected operating divisions. Using PMS data from March 30, 2006, to March 31, 2007, the HHS Inspector General found between $174 million and $1.3 billion in undisbursed balances at the four operating divisions in grant accounts that had not been closed within 180 days of the grant end date as specified in agency policy. The HHS Inspector General attributed the backlog in grant closeout in part to a lack of staff and resources, inconsistent guidance, and a lack of supporting documentation and recommended that the agency use the information in the audit reports to ensure that grants are closed out in a timely manner and to eliminate the backlog of grants eligible for closeout. The operating divisions generally concurred with the Inspector General’s recommendations and described actions that they planned to take to improve timely closeouts in response. 
Findings from HHS’s independent auditor, as reported in the agency’s PARs over several years, indicate that timely closeout of grants has been a long-standing issue at HHS but that the agency has been making progress. From fiscal year 2006 to fiscal year 2011, the HHS independent auditor routinely reported on concerns with management controls over grant closeout, including a backlog of HHS grant accounts in PMS that were already beyond what the auditor considered a reasonable time frame for closeout. For example, during its review of fiscal year 2009 grant activity provided from PMS as of March 31, 2009, the independent auditor identified approximately 644 grant obligations totaling $40.3 million that were dated prior to fiscal year 2002 and had not been closed out. The independent auditor concluded at that time that HHS management needed to increase its emphasis on closeout in order to reduce the backlog and ensure consistency between PMS and HHS operating divisions’ separate grant tracking systems, and, as part of the department’s fiscal year 2011 PAR, the independent auditor noted significant improvements in this and other financial management processes. Promptly closing out grants in the payment management system after the grant end date would help agencies minimize the amount that they are charged in monthly service fees. PSC, which operates PMS, does not close out a grant account in PMS until instructed to do so by the awarding agency and, until then, continues to charge service fees to the awarding agencies. PMS fees are calculated to allow PSC to fully recover the cost of its PMS operations. In addition to payment services, PMS also provides a number of other services to assist users, such as standardized electronic forms for meeting federal grant reporting requirements, audit support, and collection services on overdrawn grants and disallowed costs. PSC provides these additional services for all open accounts, regardless of the grant account balance. PSC charges federal grant-making agencies based on two billing rates: a hybrid rate referred to as the “Type I” rate, which is generally applied to grants awarded to state, local, and tribal governments, and a flat rate referred to as the “Type II” rate, which is generally applied to grants awarded to nonprofit agencies, hospitals, and universities. We identified more than 28,000 expired grant accounts in PMS with no undisbursed balances remaining as of the end of fiscal year 2011 for which the grant-making agency was charged a fee. More than 21,000 of these expired grant accounts with no undisbursed funds remaining—approximately 79 percent of all such accounts—were for HHS grants, with the remainder spread across 11 other federal agencies. The closeout report made available to PMS users identifies these accounts using a special status symbol, which indicates that the awarding agency only needs to submit the closeout code to finalize grant closeout. Until the code is submitted, these grant accounts continue to cost the awarding agency through accumulated monthly service fees. According to data provided by PSC, PMS users were charged a total of roughly $173,000 per month to maintain the more than 28,000 expired grant accounts with zero dollar balances listed on the yearend closeout report. Roughly $137,000 of this was charged to HHS operating divisions. Overall, the total charges for all expired grants with a zero dollar balance would represent roughly $2 million in fees if agencies were billed for these accounts for the entire year. 
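The annualized figure cited above follows from simple arithmetic on the reported monthly fees; the per-account average below is our own illustrative calculation, since PSC's actual bills depend on each account's Type I or Type II rate, which are not broken out here.

    # Illustrative arithmetic based on the figures reported above.
    monthly_fees_all = 173_000   # monthly fees for the ~28,000 zero-balance expired accounts
    accounts = 28_000

    annual_fees_all = monthly_fees_all * 12                  # roughly $2.08 million per year
    average_fee_per_account = monthly_fees_all / accounts    # roughly $6 per account per month

    print(f"${annual_fees_all:,.0f} per year; about ${average_fee_per_account:.2f} per account per month")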
While the fees are small relative to the size of the original grant awards, they can accumulate over time. We found that roughly 9,770 of the expired grant accounts with no undisbursed balances—about 34 percent—remained open 3 or more years past the grant expiration date. If the grant has otherwise been administratively and financially closed out, then agencies paying fees for expired accounts with a zero dollar balance are paying for services that are not needed rather than for services to grant recipients. The presence of expired grant accounts with no undisbursed funds remaining also raises concerns that administrative and financial closeout—the final point of accountability for these grants, which includes such important tasks as the submission of financial and performance reports—may not have been completed. As of the end of fiscal year 2011, we found that $126.2 million in undisbursed balances remained in dormant grant accounts in ASAP, another large federal payment system. These balances remained in 1,094 dormant grant accounts—accounts for which there had been no activity for 2 years or more. According to the dormant account report, this represents roughly 15 percent of the cumulative authorized funding made available for these accounts. Grant accounts for eight federal departments and other federal entities that use the ASAP system for payment services appeared on the report, with undisbursed balances ranging from roughly $41,000 to more than $40 million per entity. (See app. III for a list of ASAP customers.) Individual accounts in the ASAP system can include multiple grant agreements between a federal agency and a grantee; therefore, these reports cannot be used to identify individual grants eligible for closeout or the amount of funds that remain undisbursed for an individual grant agreement. However, the existence of undisbursed balances in inactive accounts can indicate the need for increased attention. This is particularly true of accounts where there has been no activity for a prolonged period of time. While nearly three-quarters of the undisbursed balances in dormant accounts were inactive for 3 years or less, we found roughly $33 million in 430 accounts that had been inactive for 3 years or more. Of that $33 million, $11 million in 179 accounts had been inactive for 5 years or more (see fig. 6). FMS officials first began issuing “dormant account reports” to all ASAP users in 2009 in response to the findings in our 2008 report that using federal payment systems to track undisbursed balances in grant accounts could help reduce unused funding. ASAP dormant account reports have evolved over time to improve their usability. Currently, accounts with undisbursed balances are included in dormant account reports if (1) the grantee has not drawn down funds for 2 years or more and (2) the awarding agency has made no changes to the authorized amount of funding available to the grantee in 2 years or more. Dormant account reports are generally provided twice a year—once in the fall or winter, followed by a second report in the spring or summer. The first report lists all of the dormant accounts as of a specific date, and the second report shows the status of these same accounts several months later, allowing agencies to track progress toward addressing the dormant accounts that appeared on the first report. The amounts reported for the end of fiscal year 2011 represent the first phase of this two-phase cycle. 
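The two conditions that place an account on the dormant account report, as described above, can likewise be written as a simple filter. The record fields and the 365-day year are assumptions for this sketch and do not reflect the actual ASAP data model.

    from datetime import date

    def is_dormant(last_drawdown, last_authorization_change, undisbursed, as_of, years=2):
        # An account is treated as dormant if an undisbursed balance remains, the grantee has not
        # drawn down funds for 2 years or more, and the authorized funding has not changed for
        # 2 years or more.
        cutoff = years * 365
        return (undisbursed > 0
                and (as_of - last_drawdown).days >= cutoff
                and (as_of - last_authorization_change).days >= cutoff)

    # Hypothetical account: last drawdown June 2008, last authorization change March 2008.
    print(is_dormant(date(2008, 6, 1), date(2008, 3, 1), 250_000, date(2011, 9, 30)))  # True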
Unlike PMS, the ASAP system does not provide grant management operations for users; therefore, it is the agencies’ responsibility to maintain grant management information such as the grant end date. However, as with PMS, the separation of grant management and payment functions makes it possible for agencies to close out a grant in a separate grant management system but fail to close out the grant in the ASAP system. According to FMS officials, if an ASAP account remains open, grantees may be able to continue to draw funds so long as there are funds available in the account. ASAP accounts that have no balances remaining but remain open are not included in dormant account reports regardless of their period of inactivity. FMS has encouraged agencies to close these accounts, but it does not charge users for these accounts or for other payment system services provided by the ASAP system. Instead, Congress appropriates funds to FMS to cover the cost of its operations. In addition to the HHS audits described earlier, we and agency IGs have continued to raise concerns about timely grant closeout in federal agencies and grant programs. As part of our previous report on undisbursed balances in expired grant accounts issued in 2008, we reviewed 7 years of past audits and found that both we and federal IGs issued numerous reports identifying specific grant programs or awarding agencies that had undisbursed funding in grants eligible for closeout. Since that time, we have issued additional reports identifying challenges related to timely closeout of grants, and the Inspectors General at the Departments of Agriculture (USDA), Education, Energy (DOE), HHS, Homeland Security (DHS), and Labor (Labor) have all issued reports identifying similar challenges in offices or programs within their respective agencies. These reports identified a lack of adequate systems or policies in place to properly monitor grant closeout and inadequacies in awarding agencies’ grant management processes, in part because closeouts are a low management priority. While they focused on expired grants in specific offices or grant programs, when taken together, these report findings indicate that the timely closeout of grants continues to be an issue for multiple programs and grant-making agencies across the federal government. We found that agencies did not have adequate systems and policies in place to properly monitor grant closeout. For example, in 2011, we reported that USDA’s draft grant closeout policies for the McGovern-Dole Food for Education Program did not include time frames for when grant agreements should be closed. As a result, USDA was at risk that grant agreements would not be closed out in a timely fashion, preventing USDA from ensuring that grantees of the McGovern-Dole Food for Education Program have met all financial requirements and that unused or misused funds are promptly reimbursed to USDA. We recommended that the Secretary of Agriculture formalize policies and procedures for closing out grant agreements and establish guidance to determine when agreements should be closed. USDA agreed with our recommendations and said it will take steps to address them. Similarly, in 2011 we found that roughly $24 million in Farm Labor Housing program loan and grant obligations remained undisbursed more than 5 years after the funds were obligated and that the Rural Housing Service had no guidelines for deobligation in force. The Rural Housing Service has since issued guidance, as we had recommended. 
We also found that agencies did not deobligate funds from grants eligible for closeout in a timely manner. For example, in 2012, we reported that the Department of Justice’s (DOJ) Bulletproof Vest Partnership program had not deobligated about $27 million in balances from grants awarded from fiscal years 2002 through 2009 whose terms had ended and whose grantees are no longer eligible for reimbursement. DOJ agreed with our recommendation that the department deobligate undisbursed funds from Bulletproof Vest Partnership program grants that have closed and said that in the absence of statutory restrictions stating otherwise, it intends to use the deobligated, undisbursed funds to supplement appropriations in fiscal years 2012 and 2013. In another example, we reported in 2010 that recipients of 58 percent of the Department of the Interior’s Office of Insular Affairs project grants failed to submit final closeout reports on time, which can delay the deobligation of any unspent grant funds from the project account. The Department of the Interior agreed with our recommendations to improve the Office of Insular Affairs’ ability to manage grants. Federal IGs identified similar issues at their agencies. For example, in September 2009 the Inspector General at Labor reported that funds were not deobligated when a grant expired because of delays in grant closeouts. Also, grants from the Employment and Training Administration and the Veterans’ Employment and Training Service were not closed within 12 months of their expiration because of a large backlog of grants in need of closeout (Department of Labor, Office of the Inspector General, Management Advisory Comments Identified in an Audit of the Consolidated Financial Statements for the Year Ended September 30, 2009, 22-10-006-13-001 (Washington, D.C.: 2010)). In another case, one agency reported in April 2009 that it had deobligated $2.75 million in response to a finding from the department’s inspector general, making the funds available for other research projects and preventing the potential misuse of funds. IGs also reported that system updates and a lack of timely information led to problems at DHS and the Department of Education, respectively. Federal IGs reported that grant closeout procedures have been viewed as a low priority for federal agencies and that agencies have devoted their limited staff resources to other grant management functions, including the issuance of new grant awards. A lack of attention and staffing contributes to delays in grant closeout and the timely deobligation of funds. For example, DOE’s Inspector General found that one of DOE’s regional offices was not closing out Small Business Innovation Research Phase II grants in a timely manner in part because staff focused their attention instead on active awards. The Inspector General found that expired grants had been completed more than 3 years earlier but had not been closed out. In addition, the Inspector General found questionable or unallowable costs during its review of grant closeouts. Because grantees are required to maintain annual audit and expense reports supporting project progress and costs incurred, and other information, for only 3 years, the supporting cost data may not be available for review, resulting in the payment of unnecessary and unallowable costs. 
These findings are consistent with the results of a survey of IGs and other investigative agencies by the National Procurement Fraud Task Force’s Grant Fraud Committee, a committee chaired by the Inspector General for DOJ, which aims to detect and prevent grant fraud. Many respondents to the survey suggested that grant awarding agencies are often focused on awarding grant money and do not devote sufficient resources to the oversight of how those funds are spent. Survey respondents noted that awarding agencies often inadequately monitor grantee activities by, among other things, not properly closing out grants in a timely manner. OMB has not issued governmentwide guidance on tracking or reporting undisbursed balances for grants eligible for closeout, as we recommended in 2008. OMB did issue instructions for tracking and reporting on undisbursed grant balances to a small number of affected federal agencies in 2010 and 2011 as required by law. However, this guidance included grant accounts that were still available for disbursement and was not limited only to those grant accounts eligible for closeout. We found that agencywide information on undisbursed balances in grant accounts eligible for closeout is largely lacking. In 2008, we recommended that OMB instruct all executive departments and independent agencies to annually track the amount of undisbursed balances in expired grant accounts and report on the status and resolution of the undisbursed funding in their annual performance reports. In our report, expired grant accounts were defined as the grants that remained open after the end of the grant period and were eligible for closeout. Our previous work found that reporting on the status of grant closeouts in annual performance reports, such as agency PARs, can raise the visibility of the problem within federal agencies, lead to improvements in grant closeouts, and reduce undisbursed balances. These reports enable the president, Congress, and the American people to assess agencies’ accomplishments for each fiscal year by comparing agencies’ actual performance against their annual performance goals, summarizing the findings of program evaluations completed during the year, and describing the actions needed to address any unmet goals, among other things. OMB responded at the time that it supported the intent of our recommendations to strengthen grants management by explicitly requiring federal agencies to track and report the amount of undisbursed grant funding remaining in expired grant accounts and that it believed agencies should design processes with strong internal controls to promote effective funds management for all types of obligations. OMB’s comments did not indicate a commitment to implement our recommendations. OMB stated that, during its regular review, it would consider revising the grant management guidance in Circulars No. A-102 and No. A-110 to include such instructions. As of December 2011, these Circulars, as well as No. A-11, Preparation, Submission and Execution of the Budget, and No. A-136, Financial Reporting Requirements, do not include any guidance or instructions to agencies on tracking or reporting on undisbursed balances in grants eligible for closeout in agencies’ performance reports. Section 537 of the Commerce, Justice, Science, and Related Agencies Appropriations Act of 2010 required that the Director of OMB instruct departments, agencies, and other entities receiving funds under the act to track undisbursed balances in expired grant accounts. 
The legislation specifically required that OMB instruct affected agencies to report on the following information:
1. details on future action the department, agency, or instrumentality will take to resolve undisbursed balances in expired grant accounts;
2. the method that the department, agency, or instrumentality uses to track undisbursed balances in expired grant accounts;
3. identification of undisbursed balances in expired grant accounts that may be returned to the Treasury of the United States; and
4. in the preceding 3 fiscal years, details on the total number of expired grant accounts with undisbursed balances (on the first day of each fiscal year) for the department, agency, or instrumentality and the total finances that have not been obligated to a specific project remaining in the accounts.
These legislative reporting requirements were similar to what we recommended in 2008. Subsequently, the same reporting requirements were carried forward for fiscal year 2011 by the Full-Year Continuing Appropriations Act, 2011 and for fiscal year 2012 by Section 536 of the Commerce, Justice, Science, and Related Agencies Appropriations Act, 2012, affecting select agencies' 2012 Performance and Accountability Report (PAR) and Agency Financial Report (AFR) submissions due in November 2012. In 2010 and 2011, as required by these laws, OMB issued implementing instructions to affected federal agencies' financial officers and budget officers. Four agencies—the Department of Commerce (DOC), DOJ, National Aeronautics and Space Administration (NASA), and National Science Foundation (NSF)—provided responses in their annual performance reports. However, in its instructions, OMB equated "expired grant accounts" with expired appropriation accounts. Specifically, OMB's guidance referenced the definition of expired appropriations found in Circular No. A-11 in defining expired grant accounts as "including budget authority that is no longer available for new obligations but is still available for disbursement." The performance period for active grant agreements can last multiple years, during which time authorized disbursements may be made from expired appropriation accounts. Under OMB's definition, agencies were instructed to report all undisbursed funding in expired appropriation accounts, which could include active grant accounts as well as grant accounts eligible for closeout. In contrast, in this and other reports, we defined expired grant accounts as accounts that remain open after the specified grant end date, or expiration date, and are eligible for closeout. These undisbursed balances represent funds that the federal government has obligated by entering into a grant agreement but that should no longer be disbursed to grantees because the period of availability to the grantee has ended. (See GAO-08-432 and GAO, Federal Grants: Improvements Needed in Oversight and Accountability Processes, GAO-11-773T (Washington, D.C.: June 23, 2011).) For example, officials from one agency confirmed to us that the amount it reported represented undisbursed balances in expired appropriations in the agency's two research-related appropriations accounts; the amount reported included funds available for disbursement on only active grant agreements. Similarly, officials from DOJ and NASA also confirmed to us that the number they reported in their 2010 performance reports represented balances in expired appropriations accounts and not the amount of funding that remained in grant accounts eligible for closeout.
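The practical effect of the two definitions can be illustrated with a minimal sketch. The grant records, dates, and field names below are hypothetical and are not drawn from the agencies' data; the sketch simply shows that the two definitions select different populations of grants:

```python
from datetime import date

# Hypothetical grant records: grant end date and whether the funding appropriation
# has expired for new obligations. Neither field comes from the agencies' reports.
grants = {
    "A": {"end_date": date(2013, 6, 30), "appn_expired": True},   # active grant, expired appropriation
    "B": {"end_date": date(2009, 9, 30), "appn_expired": True},   # past end date, expired appropriation
    "C": {"end_date": date(2008, 3, 31), "appn_expired": False},  # past end date, no-year funds
}
as_of = date(2011, 9, 30)

# OMB's instructions: report undisbursed balances in expired appropriation accounts.
omb_population = sorted(g for g, r in grants.items() if r["appn_expired"])

# This report's definition: grants past their end date and eligible for closeout.
closeout_population = sorted(g for g, r in grants.items() if r["end_date"] < as_of)

print(omb_population)       # ['A', 'B'] -- includes a grant still in its performance period
print(closeout_population)  # ['B', 'C'] -- includes a no-year-funded grant the other definition misses
```

Under OMB's instructions, an active grant funded from an expired appropriation is counted while a no-year-funded grant past its end date is not; the closeout-based definition used in this report captures the reverse.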
Furthermore, according to DOJ officials, most DOJ grants, with the exception of grants funded through the American Recovery and Reinvestment Act (Recovery Act), are funded with no-year appropriations that do not enter into an expired phase and therefore fall outside the scope of OMB's guidance. Agency officials told us that the purpose of gathering information on grants funded with expired appropriations was unclear. Federal agencies are generally required to include detailed information on the overall budgetary resources made available to the agency, including amounts in expired appropriation accounts, as well as the status of those resources at the end of the fiscal year. Agency officials said that the information on undisbursed balances reported in their PAR or AFR was derived at least in part from these publicly available budgetary reports and is generally readily available; however, information on undisbursed balances in grant accounts that have reached their end date and are eligible for closeout is generally not publicly available or otherwise provided to OMB and Congress. OMB issued largely identical instructions to select agencies for reporting on undisbursed balances in expired grant accounts in their 2011 performance reports. While NASA and NSF took different approaches in reporting compared to the prior year, DOJ reported on the amount of undisbursed funding in expired appropriations. DOC reported undisbursed balances, but could not confirm whether all of its grant-making bureaus reported expired appropriations or grant accounts. NASA officials said that the number reported in their 2011 PAR represented the amount of undisbursed balances in grant accounts that have reached their end date and are eligible for closeout. Based on this understanding of the guidance, NASA reported in its 2011 PAR that in 2009 there were about 1,650 expired grants with $18 million in undisbursed balances. In comparison, when reporting on amounts in expired appropriations in their previous year's PAR, NASA reported over 2,100 expired grants with $58 million in undisbursed balances for 2009. In its 2011 PAR, NSF reported neither the amount of grants funded with expired appropriations nor the amount of undisbursed balances in grant accounts that have reached their end date and are eligible for closeout. Instead, NSF reported the amount of funding that was deobligated as a result of successfully closing out grants. For example, NSF reported that in fiscal year 2011, the agency closed out a total of 18,648 grants. As a result, $35,204,328 in undisbursed balances was deobligated and retained for adjustments to existing obligations, and an additional $5,610,546 was deobligated and returned to Treasury. This example illustrates how closing out grants allows an agency to redirect unspent funds or return the funds to Treasury as appropriate. It does not, however, provide information on the number of grants past their grant end date or the balances remaining in these grant accounts. In our review of CFO Act agencies' annual performance reports for fiscal years 2009 to 2011, we found that systematic, agencywide information on undisbursed balances in grant accounts eligible for closeout is largely lacking, in part because OMB guidance does not provide explicit instructions to agencies to track undisbursed balances for grants that are eligible for closeout.
Other than the four agencies that received explicit instructions from OMB and the information reported by independent auditors, we found that only one federal agency—the Environmental Protection Agency (EPA)—reported agencywide information on the timeliness of grant closeout. EPA developed an agencywide performance metric—the percentage of eligible grants closed out—in part as a response to our prior findings that the agency had a large backlog of grants in need of closeout. In its 2011 AFR, EPA reported that it closed out 99.5 percent of eligible grants from 2009 and earlier and 93.4 percent of grants that expired in the prior fiscal year. As part of our prior work, we concluded that while EPA's performance measure did not assess compliance since it did not reflect the 180-day closeout standard, the measure was a valuable tool for determining if grants were ultimately closed. EPA does not provide information in its AFR on the amount of undisbursed funds that remain in expired grants. While we have noted progress in EPA's recovery of funds from expired grants in our prior work, we have also observed that EPA's budget justification documents do not describe the amount of deobligated funding available for new obligations; such information could be useful to Congress because the availability of these funds could partially offset the need for new funding. We found that information on timely grant closeout in other agencies' performance reports was limited to sections of the performance reports prepared by independent auditors, where two agencies' auditors raised concerns related to timely grant closeout. Our analysis shows that there has been an improvement in closing out expired grant accounts with undisbursed balances in PMS since our 2008 report. Undisbursed balances in these accounts declined from roughly $1 billion at the end of December 2006 to a little more than $794 million at the end of September 2011, despite a significant increase in annual grant disbursements through PMS during this time. However, more work needs to be done to further improve the timeliness of grant closeout and reduce undisbursed balances. In our 2008 report, we found that agencies can improve their grant closeout process when they direct their attention to the issue and make timely grant closeout a high priority. Since this time, HHS has increased attention on grant closeout, and both the agency and its independent auditor have reported that progress has been made toward addressing the agency's existing backlog of grant accounts in PMS eligible for closeout. The dormant account report developed by Treasury offers further encouragement by raising agencies' awareness of undisbursed balances in inactive grant accounts in the ASAP system. We have found that agencies can raise the internal and external visibility of the issue of undisbursed balances and improve performance by reporting on undisbursed balances in grants that are eligible for closeout in agencies' annual performance reports. However, the number of agencies that have voluntarily provided this information in their performance reports is limited. We therefore reiterate our previous recommendation, not yet implemented, that OMB should instruct all executive departments and independent agencies to report in their annual performance reports on the status and resolution of undisbursed funding in grants that have reached the grant end date, the actions taken to resolve the undisbursed funding, and the outcomes associated with these actions.
OMB’s implementation of Section 537 of the Commerce, Justice, Science, and Related Agencies Appropriations Act of 2010 and subsequent legislation creates a framework for such reporting. However, interviews with agency officials and variations in agencies’ responses to OMB’s instructions indicate that additional clarification is needed, particularly to the definition of “expired grant accounts,” if this information is to be effectively used by agency management, OMB, and Congress to address the backlog of grants in need of closeout. The definition included in guidance issued by OMB equates “expired grant accounts” with grants funded with expired appropriations and therefore includes active grant agreements still in the implementation phase for which the agency would have valid reasons to make future disbursements. By instead focusing on undisbursed balances obligated to grant agreements that have reached the end of their period of performance and are eligible for closeout, OMB could better direct agency management focus toward a subset of grants in need of more immediate attention. OMB could also better direct agency management’s focus by putting an emphasis on grants that have not been closed out several years past their expiration date. As time passes, these funds become more susceptible to improper spending or accounting as monitoring diminishes over time. OMB’s guidance currently does not address grants with no undisbursed balances remaining. The presence of tens of thousands of expired grant accounts in PMS with no undisbursed funds remaining raises concerns that these accounts are not receiving sufficient attention. Reducing the number of accounts with zero balances remaining would help ensure that administrative and financial closeout—the final point of accountability for these grants—is being completed. It would also minimize the amount agencies pay in potential fees for maintaining these accounts, which can accumulate over time. In addition to the previous recommendation reiterated above, we recommend that the Director, OMB, take the following three actions: Revise the definition of “undisbursed balances in expired grant accounts” in future guidance issued to agencies, including those required to report under Section 536 of the Commerce, Justice, Science, and Related Agencies Consolidated Appropriations Act, 2012, to focus on undisbursed balances obligated to grant agreements that have reached the grant end date and are eligible for closeout, as described in this report. Instruct agencies with undisbursed balances still obligated to grants several years past their grant end date to develop and implement strategies to quickly and efficiently take action to close out these grants and return unspent funds to the Treasury when appropriate. Instruct agencies with expired grant accounts in federal payment systems with no undisbursed balances remaining to develop and implement procedures to annually identify and close out these accounts to ensure that all closeout requirements have been met and to minimize any potential fees for accounts with no balances. We provided a draft of this report to the Administrator of the National Aeronautics and Space Administration; the Attorney General; the Director of the National Science Foundation; the Acting Director of the Office of Management and Budget; and the Secretaries of Commerce, Health and Human Services, and Treasury. 
OMB staff provided the following comments via e-mail: “OMB is in general agreement with GAO’s recommendation in regards to providing better guidance for agencies in the management and closeout of expired grants with undisbursed balances. We are in the process of reviewing and streamlining our grant policy guidance to the agencies and will consider these recommendations.” The Chief Financial Officer and Assistant Secretary for Administration at DOC and the Assistant Secretary for Legislation at HHS responded with written comments, which we have reprinted in appendixes IV and V. Staff at the other agencies provided technical or clarifying comments, which we incorporated as appropriate, or had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of the National Aeronautics and Space Administration; the Attorney General; the Director of the National Science Foundation; the Acting Director of the Office of Management and Budget; and the Secretaries of Commerce, Health and Human Services, and Treasury. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact Stanley J. Czerwinski at (202) 512-6806 or czerwinskis@gao.gov or Beryl H. Davis at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. Our objectives for this report were to evaluate: (1) the amount of undisbursed funding remaining in expired grant accounts including the amounts that have remained unspent for 5 years or more and for 10 years or more, (2) issues raised by GAO and federal inspectors general (IG) related to timely grant closeout by federal agencies, and (3) what actions the Office of Management and Budget (OMB) and agencies have taken to track undisbursed balances in grants eligible for closeout. To address the first objective, we analyzed data from two federal payment systems: the Payment Management System (PMS) administered by Department of Health and Human Services’ (HHS) Program Support Center (PSC) and the Automated Standard Application for Payments (ASAP) system administered jointly by the Department of the Treasury (Treasury) and the Federal Reserve Bank of Richmond. Federal payment systems facilitate the transfer of cash payments from federal awarding agencies to grantees. Some agencies make grant payments directly to grantees using their own proprietary payment systems, while others enter into arrangements with payment systems that serve multiple agencies to make payments on their behalf. The PMS and ASAP systems were selected for review based on the following criteria: 1. These payment systems provide payment services to other federal departments and entities. In 2011, offices from 13 federal departments and other federal entities used PMS for making grant disbursements, and offices from 9 federal departments and other federal entities used ASAP for grant disbursements. See appendixes II and III for a full list of federal entities that use PMS and ASAP for payment services. 2. These payment systems account for a significant percentage of civilian federal grant disbursements. 
Based on fiscal year 2010 data, the most recently available at the time of our selection, PMS made about $411 billion in grant disbursements, or 68 percent of all civilian federal grant disbursements in fiscal year 2010, and ASAP made payments of an additional $45 billion in grant disbursements, or 7 percent of all civilian federal grant disbursements in that year. In 2011, PMS and ASAP disbursed $415 billion and $62 billion in federal grant funding, respectively, or 79 percent of all civilian federal grant disbursements in fiscal year 2011. PMS is a centralized grant payment and cash management system, operated by HHS's Program Support Center (PSC) in the Division of Payment Management (DPM). According to DPM, the main purpose of PMS is to serve as the fiscal intermediary between awarding agencies and the recipients of grants and contracts. Its main objectives are to expedite the flow of cash between the federal government and recipients, transmit recipient disbursement data back to the awarding agencies, and manage cash flow advances to grant recipients. PSC personnel operate PMS, making payments to grant recipients, maintaining user/recipient liaison, and reporting disbursement data to awarding agencies. Awarding agencies' responsibilities include entry of authorization data into PMS, program and grant monitoring, grant closeout, and reconciliation of their accounting records to PMS information. Awarding agencies pay PSC a service fee for maintaining accounts and executing payments through PMS. PMS continues to charge agency customers a servicing fee until an account is closed. To update our previous analysis of undisbursed balances in expired grant accounts and provide a degree of comparability, we replicated the methodology used in our 2008 report. Namely, to determine the amount of undisbursed balances in expired grant accounts, we analyzed PMS data from closeout reports PSC makes available to PMS customers each quarter. These closeout reports list all expired grant accounts that, according to the data system, have not completed all of their closeout procedures. An account is considered expired in PMS if (1) the grant end date is more than 3 months old and (2) the latest date of disbursement is at least 9 months old. PMS does not close a grant account until instructed to do so by the awarding agency. For each grant account, the report includes such information as the identification number, the amount of funding authorized for the grant, the amount disbursed, and the beginning and end dates for the grant. The grant end date is a mandatory field completed by the awarding agency. PSC provided us with the PMS quarterly closeout report for the end of fiscal year 2011 (September 30, 2011). PSC appended to the closeout data an additional field showing the applicable number from the Catalog of Federal Domestic Assistance (CFDA) for each grant account. We used the CFDA number provided by PSC to help determine which accounts to exclude from our analysis. The purpose of these exclusions was to avoid including accounts that would distort the calculation of undisbursed funds in expired PMS grant accounts and to provide comparability with our previous findings. Our criteria for excluding accounts were consistent with the methodology we used in our 2008 report. We excluded a total of 115 grant programs—both HHS and non-HHS—based on the following: We excluded accounts from our analysis that did not have a defined end date.
The purpose of the PMS closeout report is to alert awarding agencies of accounts in PMS that remain open after their posted end date. If a grant does not have a defined end date, such as grants under the Temporary Assistance for Needy Families program, then HHS staff consider the PMS closeout report merely a reminder to the awarding agency that the account remains open and that PMS continues to charge fees on the open account. We excluded expired accounts associated with the following HHS block grant programs: Community Mental Health Services Block Grant, Preventive Health and Health Services Block Grant, Substance Abuse and Preventive Treatment Block Grant, Maternal and Child Health Services Block Grant, Social Services Block Grant, Low Income Housing Energy Assistance Block Grant, and Community Services Block Grant. An independent audit of PMS stated that (1) the funds for these block grants continued to be available to the grantees until the obligation/expenditure period expired, and (2) traditional financial reporting requirements do not apply to these programs. We excluded grant accounts with a negative undisbursed balance, meaning that total payments to the grantee exceed the authorized amount. According to PSC officials, an overadvancement on a PMS account can occur if the awarding agency reduces a grant's authorization limit below the amount already paid to the grantee because the awarding agency determines that the grant recipient is entitled to a lesser amount than the agency originally authorized. Agencies use their accounting systems to send authorization transactions to PMS. If an agency's authorization transaction will create an overadvanced account in PMS, the transaction is sent to an exception file for review. The agency must override the exception to transmit an authorization transaction that causes an overadvanced account. According to officials from PSC, agencies will do so (1) if they want PSC to initiate a collection action to recover the overadvanced amount or (2) for grantees with multiple grant accounts in PMS that are "pooled," to redistribute charges to open grant accounts to correct the overadvanced grant. We excluded accounts that were excluded in our 2008 analysis because the CFDA number and program description had been deleted from the Catalog before 2000 (the last Catalog entry would have been in 1999) or we could not find any information on the CFDA number either in CFDA or in the CFDA Historical Index, which provides the history of all CFDA numbers. We excluded accounts if we could not associate them with a grant program. For instance, we found some PMS accounts that, based on the most recent CFDA, were for nongrants. We included expired accounts that were associated with grants or cooperative agreements that had a time limit for spending the funds. We also included accounts for letters of credit. According to PSC officials, almost all accounts in PMS are grants or cooperative agreements, with the exception of a few letters of credit. The recipient of a letter of credit may not be required to meet the same performance reporting requirements as the recipient of a grant, but, as with grants, recipients are required to meet certain reporting requirements, such as submitting a Federal Financial Report (SF-425). Letters of credit, according to PSC officials, also have end dates in PMS comparable to grants and follow the same closeout procedures in PMS. PSC informed us that it would not be able to exclude letters of credit from the data it provided to us.
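The expired-account definition and the exclusion rules described above amount to a straightforward filter over the closeout report. The following minimal sketch applies such a filter to hypothetical records; the field names, dates, CFDA placeholder, and dollar amounts are illustrative and do not reflect the actual PMS report layout:

```python
from datetime import date, timedelta

AS_OF = date(2011, 9, 30)
EXCLUDED_CFDA = {"99.999"}  # placeholder for the excluded programs' CFDA numbers

# Hypothetical closeout-report rows: end date, last disbursement date,
# authorized and disbursed amounts, and CFDA number.
rows = [
    {"id": 1, "end": date(2010, 6, 30), "last_disb": date(2010, 8, 1),  "auth": 500_000, "disb": 450_000, "cfda": "10.001"},
    {"id": 2, "end": None,              "last_disb": date(2009, 1, 15), "auth": 900_000, "disb": 900_000, "cfda": "99.999"},
    {"id": 3, "end": date(2011, 8, 31), "last_disb": date(2011, 9, 1),  "auth": 250_000, "disb": 100_000, "cfda": "10.001"},
]

def is_expired(row):
    # Expired in PMS: the grant end date is more than 3 months old and the
    # latest disbursement is at least 9 months old (approximated here in days).
    return (row["end"] is not None
            and row["end"] < AS_OF - timedelta(days=92)
            and row["last_disb"] <= AS_OF - timedelta(days=274))

def is_excluded(row):
    # Exclusions: no defined end date, an excluded program, or a negative undisbursed balance.
    return row["end"] is None or row["cfda"] in EXCLUDED_CFDA or row["auth"] - row["disb"] < 0

kept = [r for r in rows if is_expired(r) and not is_excluded(r)]
undisbursed_total = sum(r["auth"] - r["disb"] for r in kept)
print(len(kept), undisbursed_total)  # 1 account with $50,000 undisbursed
```

In a real analysis the 3-month and 9-month windows, the excluded-program list, and the records themselves would come from the quarterly closeout report and the criteria described above rather than from these illustrative values.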
For reporting purposes, we separated data into two sets of expired grant accounts: (1) one set consisted of expired accounts for which all of the funds made available had been disbursed and (2) a second set consisted of expired accounts with a positive undisbursed balance. To obtain an estimate of the total amount of fees paid for maintaining accounts with no undisbursed balances remaining, we requested data from PSC for all accounts that appear on the year-end fiscal year 2011 closeout report (i.e., as of September 30, 2011) with a unique accounting status symbol indicating that no undisbursed balances remained and that the awarding agency only needed to submit the final closeout code to PSC to finalize grant closeout. According to data provided by PSC, PMS users were charged a total of roughly $173,000 per month to maintain the more than 28,000 expired grant accounts with no undisbursed balances remaining that were listed on the year-end closeout report. Roughly $137,000 of this was charged to HHS operating divisions. The closeout report provided by PSC does not provide information on when the authorized funds in these accounts were fully disbursed. However, more than 9,000 of these grants were more than 3 years past their end date. For illustrative purposes, we multiplied the monthly fees for these accounts by 12 to obtain a rough approximation of what the total annual fees charged for these accounts would be, assuming that all accounts with no undisbursed balances remaining as of September 30, 2011, had a zero balance for the entire fiscal year. To test the reliability of PMS closeout data, we (1) reviewed existing documentation related to PMS, including the most recent audit of the design and operating effectiveness of the system's controls, (2) interviewed officials responsible for administration of the database on data entry and editing procedures and the production of closeout reports, and (3) conducted electronic testing for obvious errors in completeness and accuracy. An independent auditor assessed internal controls for PMS in 2011 and reported that, with one exception, the controls were suitably designed to provide reasonable assurance that the control objectives would be achieved if the controls operated effectively. We discussed with HHS officials data entry and editing procedures, the production of closeout reports, and any known limitations associated with the data. According to HHS officials, no-cost extensions that extend the grant period without changing the authorized amount of funding may not be reflected in PMS data. As a result, PMS closeout reports may include grants that have received an extension and are therefore not eligible for closeout. No obvious errors in completeness and accuracy were identified during electronic testing. After conducting these assessment steps, we found that the PMS closeout data were sufficiently reliable for the purposes of this report. ASAP is an electronic payment system implemented jointly by the Department of the Treasury's (Treasury) Financial Management Service (FMS) and the Federal Reserve Bank of Richmond. ASAP allows grantee organizations receiving federal funds to draw from accounts preauthorized by federal agencies. In addition to grants, ASAP is also used to make payments to financial agents that are performing financial services for FMS and other federal agencies. For example, ASAP can be used for reimbursing financial institutions for payments made by federal agencies through debit cards.
Agencies establish and maintain accounts in ASAP to facilitate the flow of funds to organizations. Unlike PMS grant accounts, which represent an individual grant agreement between a federal agency and grantee, accounts in ASAP can represent multiple grant agreements between an awarding agency and a grantee. Individual grant agreements within these accounts may have reached their grant end date, while others may not have. Therefore, the ASAP system cannot be used to determine which individual grants are eligible for closeout. FMS officials began issuing “dormant account reports” to all ASAP users in 2009 as a response, Treasury officials told us, to the findings in our 2008 report that using federal payment systems to track undisbursed balances in grant accounts can help reduce unused funding. Dormant account reports provide information on all inactive ASAP accounts, including accounts for nongrant programs. For the purposes of this report, our focus was on accounts for grant programs only. The first dormant account report focused only on undisbursed balances in accounts where the grantee had not drawn down funds for a prolonged period of time. However, the criteria used by FMS for generating dormant account reports have evolved over time to improve the usability of the reports. For the dormant account report provided to us for the end of fiscal year 2011, accounts with undisbursed balances were included if: (1) the grantee had not drawn down funds for at least 2 years, and (2) the awarding agency had made no changes to the authorized amount of funding available for at least 2 years. For each grant account, the report includes information such as the identification number, the account balance, the cumulative amount of funding authorized to the grantee, and the date of the last payment request. The grant account end date is an optional field completed by the awarding agency. FMS did not include accounts with no balance remaining on this report, but has encouraged agencies to close these accounts if they are no longer active. FMS does not charge users for these accounts or for other payment system services provided by the ASAP system and instead receives appropriations to cover the cost of its operations. According to FMS officials, dormant account reports are generally provided twice a year, allowing agencies to track progress on addressing inactive accounts. The first report lists all of the dormant accounts as of a specific date, and the second report shows agencies’ progress toward addressing dormant accounts included on the first report. For this report, we reviewed the most recently available dormant account report for the end of fiscal year 2011. This report listed all ASAP accounts that have not had any activity (i.e., no payment requests and no funding added or removed) since September 30, 2009. To test the reliability of ASAP dormant account report data, we: (1) reviewed existing documentation related to the ASAP system, (2) interviewed officials responsible for administration of the database on data entry and editing procedures and the production of dormant account reports, and (3) conducted electronic testing for obvious errors in completeness and accuracy. We discussed with FMS officials internal control testing and other quality review procedures for the ASAP system as well as dormant account reports. We also discussed missing data in certain fields on the dormant account report identified during electronic testing to ensure that omissions did not indicate potential errors. 
After conducting these assessment steps, we found that the data from dormant account reports were sufficiently reliable for the purposes of this report. To address our second objective, we collected and reviewed audit reports issued by GAO from September 2007 to May 2011 and by the offices of inspectors general at the 24 Chief Financial Officers Act (CFO Act) agencies from January 2008 to June 2011 that, since the issuance of our 2008 report, had focused on undisbursed funds in expired accounts. We reviewed IG reports from the 24 CFO Act agencies in order to provide coverage of the major grant-making agencies and because this approach updated the review we performed as part of the work on our 2008 report, which included IG reports issued between 2000 and 2006. Based on our review for this report, we identified IG reports on HHS, the Departments of Energy and Homeland Security, and the Environmental Protection Agency with findings of weaknesses related to undisbursed grant balances or grant closeout. We then interviewed IG officials at these four agencies to discuss their findings and any plans to conduct future audits on undisbursed grant funds. We also interviewed IG officials from four additional agencies—the Department of Commerce (DOC), Department of Justice (DOJ), National Aeronautics and Space Administration (NASA), and National Science Foundation (NSF)—where agency management had reported on undisbursed balances in expired grant accounts in the agencies' 2010 and 2011 annual performance reports, as described below. Finally, we followed up with each of the IG offices at the remaining 16 CFO Act agencies via e-mail to ensure we had obtained any relevant reports and to determine if they had any plans to conduct future audits related to undisbursed grant funds or grant closeouts. As a result of this follow-up, we identified two additional IG reports related to timely grant closeout at the Departments of Agriculture and Labor. To analyze actions agencies have taken to track undisbursed balances in expired accounts, we reviewed annual performance reports for all 24 agencies required to issue audited financial statements under the CFO Act from fiscal years 2009 to 2011. The 24 CFO Act agencies were responsible for the vast majority—more than 95 percent—of grant programs identified in the CFDA database as of June 2, 2011. We performed a keyword search to determine if the agency, its office of inspector general, or the independent auditor had reported on undisbursed balances in expired grant accounts or the timely closeout of grant accounts. We also reviewed the Performance and Accountability Reports (PAR) and Agency Financial Reports (AFR) of four entities receiving funds under the Commerce, Justice, Science, and Related Agencies Act provision of P.L. 111-117 for compliance with relevant reporting requirements in Section 537. To address our third objective, we reviewed relevant OMB guidance and regulations from federal grant-making agencies. Specifically, we reviewed the OMB Circulars No. A-102, Grants and Cooperative Agreements with State and Local Governments, and No. A-110, Uniform Administrative Requirements for Grants and Other Agreements with Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations.
Each federal agency that awards and administers grants and cooperative agreements that are subject to the guidance in Circulars A-102 and A-110 is responsible for issuing regulations that are consistent with the circulars, unless different provisions are required by federal statute or are approved by OMB. We reviewed regulations from federal grant-making agencies that have codified governmentwide grants requirements, as identified on OMB's website, to determine (1) the length of time prescribed for closing out federal grants and (2) the length of time federal grantees are required to retain records related to grant awards. To identify federal governmentwide guidance related to federal agency performance reporting, we reviewed OMB Circular No. A-11, Preparation, Submission and Execution of the Budget, and Circular No. A-136, Financial Reporting Requirements. We also reviewed two memoranda related to tracking undisbursed balances in expired grant accounts issued by OMB in October 2010 and August 2011 to select agencies receiving funding under the Commerce, Justice, Science, and Related Agencies appropriations acts, as required by law (Pub. L. No. 111-117 and Pub. L. No. 112-10). We interviewed staff from OMB to discuss the purpose and scope of their guidance and officials at the four agencies that reported undisbursed balances in expired grant accounts in 2010 and 2011 annual performance reports—DOC, DOJ, NASA, and NSF—to discuss their implementation of OMB's instructions.
Appendix II: Federal Agencies Using the Payment Management System (PMS) for Grant Payments (as of June 2011)
Appendix III: Federal Agencies Using the Automated Standard Application for Payments (ASAP) System for Grant Payments (as of June 2011)
In addition to the individuals named above, Phyllis L. Anderson, Assistant Director, Thomas M. James, Assistant Director, Thomas J. McCabe, Analyst-in-Charge, and Andrew Y. Ching, Travis P. Hill, Jennifer Leone, Omari A. Norman, Susan Ragland, Cynthia M. Saunders, and Michael Springer made major contributions to this report.
In 2008, GAO reported that about $1 billion in undisbursed funding remained in expired grant accounts in the largest civilian payment system for grants, PMS, operated by the Department of Health and Human Services' Program Support Center. GAO was asked to update its 2008 analysis evaluating: (1) the amount of undisbursed funding remaining in expired grant accounts, including the amounts that have remained unspent for 5 years or more and for 10 years or more; (2) issues raised by GAO and federal inspectors general related to timely grant closeout by federal agencies; and (3) actions OMB and agencies have taken to track undisbursed balances in grants eligible for closeout. To do this, GAO analyzed data from two federal payment systems disbursing 79 percent of all civilian federal grant disbursements—PMS and the ASAP system, which is operated jointly by the Department of the Treasury and the Federal Reserve Bank of Richmond. In addition, GAO reviewed audit reports that it and federal inspectors general issued; relevant OMB circulars and guidance; and performance reports from federal agencies. At the end of fiscal year 2011, GAO identified more than $794 million in funding remaining in expired grant accounts—accounts that were more than 3 months past the grant end date and had no activity for 9 months or more—in the Payment Management System (PMS). GAO found that undisbursed balances remained in some grant accounts several years past their expiration date: $110.9 million in undisbursed funding remained unspent more than 5 years past the grant end date, including $9.5 million that remained unspent for 10 years or more. GAO also found $126 million in grant accounts in the Automated Standard Application for Payments (ASAP) system for which there had been no activity for 2 years or more, including $11 million that remained inactive for 5 years or more. However, data from these two systems are not comparable because, unlike PMS, ASAP accounts can include multiple grant agreements between a federal agency and a grantee, only some of which may be eligible for closeout. GAO and agency inspectors general have raised concerns in audit reports about timely grant closeout. These reports found that some agencies lack adequate systems or policies to properly monitor grant closeout or did not deobligate funds from grants eligible for closeout in a timely manner. OMB issued guidance to certain agencies at the direction of Congress for reporting undisbursed balances in expired grant accounts that instructed agencies to report on expired appropriations accounts rather than grant accounts eligible for closeout. By focusing on grants eligible for closeout, OMB could better direct agency management toward grants in need of more immediate attention. Grant closeout makes funds less susceptible to fraud, waste, and mismanagement; reduces the potential costs in fees related to maintaining grants; and may enable agencies to redirect resources to other projects. GAO recommends that OMB revise future guidance to better target undisbursed balances in grants eligible for closeout and instruct agencies to take action to close out grants that are several years past their end date or have no undisbursed balances remaining.
OMB staff said that they generally agreed with the recommendations and will consider them as they review and streamline grant policy guidance.
Farming is an inherently risky enterprise. In conducting their operations, farmers are exposed to both production and price risks. Crop insurance is one method farmers have of protecting themselves against these risks. Over the years, the federal government has played an active role in helping to mitigate the effects of these risks on farm income by promoting the use of crop insurance. Federal crop insurance began on an experimental basis in 1938, after private insurance companies were unable to establish a financially viable crop insurance business. The federal crop insurance program is designed to protect farmers from financial losses caused by events such as droughts, floods, hurricanes, and other natural disasters as well as losses resulting from a drop in crop prices. The Federal Crop Insurance Corporation (FCIC), an agency within USDA, was created to administer the federal crop insurance program. Originally, crop insurance was offered to farmers directly through FCIC. However, in 1980, Congress enacted legislation that expanded the program and, for the first time, directed that crop insurance—to the maximum extent possible—be offered through private insurance companies, which would sell, service, and share in the risk of federal crop insurance policies. In 1996, Congress created an independent office, the Risk Management Agency (RMA), to supervise FCIC operations and to administer and oversee the federal crop insurance program. Federal crop insurance offers farmers various types of insurance coverage to protect against crop loss and revenue loss. Multiperil crop insurance is designed to minimize risk against crop losses due to nature—such as hail, drought, and insects—and to help protect farmers against loss of production below a predetermined yield, which is calculated using the farmer's actual production history. Buy-up insurance, the predominant form of coverage, provides protection at different levels, ranging from 50 to 85 percent of production. Catastrophic insurance provides farmers with protection against extreme crop losses. Revenue insurance, a newer crop insurance product, provides protection against losses in revenue associated with low crop market prices in addition to protecting against crop loss. RMA, through FCIC, pays a portion of farmers' premiums for multiperil and revenue insurance, and it pays the total premium for catastrophic insurance. However, farmers still must pay an administrative fee for catastrophic insurance. RMA determines the amount of premium for each type of insurance policy by crop. RMA, through FCIC, contracts with private insurance companies that then sell these policies to farmers. Companies sell crop insurance to farmers through agents. An agent, a person licensed by the state in which the agent does business to sell crop insurance, is employed by or contracts with a company to sell and service eligible crop insurance policies. While most companies pay their agents a commission to sell and service crop insurance policies, some companies pay agents a salary. American Growers paid its agents a commission. RMA establishes the terms and conditions to be used by private insurance companies selling and servicing crop insurance policies to farmers through a contract made with the companies called the Standard Reinsurance Agreement (SRA). The SRA is a cooperative financial assistance agreement between RMA, through FCIC, and the private crop insurance companies to deliver federal crop insurance under the authority of the Federal Crop Insurance Act.
Under the SRA, FCIC reinsures or subsidizes a portion of the losses and pays the insurance companies an administrative fee or expense reimbursement—a preestablished percentage of premiums—to cover the companies' administrative and operating expenses of selling and servicing crop insurance policies, including the expenses associated with adjusting claims. While the reimbursement rate is set at a level to cover the companies' costs of selling and servicing crop insurance policies, the companies have no obligation to spend their payment on expenses related to crop insurance, and they may spend more than they receive from FCIC. The current reimbursement rates, set by statute, are based on recommendations in our 1997 report on the costs associated with selling and servicing crop insurance policies. However, RMA does not have a process for regularly reviewing and updating these rates. RMA is currently conducting a limited review of companies' expenses to validate the costs of selling and servicing federally reinsured crop insurance policies. RMA, through FCIC, is the reinsurer for a portion of all policies covered by the federal crop insurance program. Reinsurance is sometimes referred to as insurance for insurance companies. It is a method of dividing the risk among several insurance companies through cooperative arrangements that specify ways in which the companies will share risks. Reinsurance serves to limit liability on specific risks, increase the volume of insurance policies that may be written, and help companies stabilize their business in the face of wide market swings in the insurance industry. As the reinsurer, RMA shares the risks associated with crop insurance policies with companies that sell federal crop insurance. However, if a crop insurance company is unable to fulfill its obligations to any federal crop insurance policyholder, RMA, as the ultimate guarantor for losses, assumes all obligations for unpaid losses on these policies. Reinsurance is also available through private reinsurance companies. Crop insurance companies must maintain certain surplus levels to issue crop insurance policies. However, they may increase their capacity to write policies and may further reduce their risk of losses by purchasing reinsurance from private reinsurance companies on the risk not already covered by FCIC. American Growers was originally established in 1946 as Old Homestead Hail Insurance Company. The company went through several reorganizations and name changes between 1946 and 1989. In 1989, the company became American Growers Insurance Company, operating as a subsidiary of the Redland Group, an Iowa-based insurance holding company. Acceptance Insurance Companies Inc. (Acceptance)—a publicly owned holding company that sold specialty property and casualty insurance—acquired American Growers in 1993. As a wholly owned subsidiary of Acceptance, American Growers was primarily responsible for selling and servicing federal crop insurance policies and shared the same general management as the parent organization. Another wholly owned subsidiary of Acceptance, American Agrisurance Inc., served as the marketing arm for American Growers. American Growers' failure was the result of a series of company decisions that reduced the company's surplus, making it vulnerable to collapse when widespread drought erased anticipated profits in 2002. The company's decisions were part of an overall management strategy to increase the scope and size of American Growers' crop insurance business.
The company’s surplus declined due to losses and other costs from mistakes made when introducing a new crop insurance product, decisions to pay higher than average agent commissions, and the purchase of a competitor’s business. Additionally, the company’s operating expenses were about 1 1/3 times its reimbursement from RMA. In other words, American Growers was spending $130 for every $100 it was receiving from RMA to pay for selling and servicing crop insurance. American Growers planned to use profits from policy premiums to pay for the expenses not covered by RMA’s reimbursement. When these gains did not materialize due to widespread drought, the company’s surplus dropped below statutory minimums, prompting NDOI to take control of the company. First, the company introduced a new crop insurance product, but mistakes associated with the sale of this product resulted in significant losses in the company’s surplus. In 1997, the company chose to market a new crop insurance product, Crop Revenue Coverage Plus (CRC Plus), which was a supplement to federal crop insurance, but which was not reinsured by RMA. In 1999, American Growers expanded the sale of this product into rice, a crop with which it had little experience. When the company realized it had mis-priced the product for rice and withdrew the product, farmers who had planned on using CRC Plus sued the company. Financial losses, legal settlements, and other costs related to CRC Plus caused significant losses in the company’s financial surplus. Appendix II provides further details on the losses associated with CRC Plus. Second, American Growers chose to spend more than RMA reimbursed it for selling and servicing crop insurance, in part, because the company chose to pay above-average agent commissions in order to attract more agents to sell for the company. As part of its effort to expand operations, the company in 2000 to 2002, paid agent commissions about 12 percent higher, on average, than those offered by other crop insurance companies. In addition to paying agent commission rates above the average of other companies in the industry, American Growers offered agent sales incentives, such as trips to resort locations, and funded other expenses not required to sell and service federal crop insurance. These expenses, among others, created operating costs that were 11 percent greater than the average operating costs of other companies selling crop insurance, and these expenses exceeded the reimbursement RMA provided companies. Appendix III provides additional details of the high operating costs associated with agent commissions and other expenses. Third, the company purchased the crop business of a competitor, which increased its expenses. In 2001, American Growers attempted to expand its share of the crop insurance market by purchasing assets from another company, including that company’s book of crop insurance business. Because American Growers was unable to achieve the operational efficiencies it had anticipated, this acquisition resulted in additional operating costs and expenses that were higher than the reimbursement that RMA provided companies to cover the sale and service of crop insurance. Appendix IV provides additional details on the operating expenses incurred from the purchase of a competitor’s crop insurance business. Finally, the company relied on large underwriting gains to pay for its expenses, rather than RMA’s reimbursement. 
When these gains did not materialize due to widespread drought in 2002, the company's surplus dropped to a level that prompted NDOI to take control of the company. In its 2002 operating budget, American Growers projected profits in excess of its 10-year average and relied on these anticipated profits to cover the company's operating expenses and to further its growth. The company's profit projections were based, in part, on retaining a higher percentage of the risk for the policies it sold than in past years. By retaining a higher percentage of the risk on the policies, American Growers could increase its profits if claims were low. Conversely, the company increased its exposure to loss if claims were high. However, profits did not materialize because of widespread drought, which caused overall federal crop insurance program losses to increase from $3 billion in 2001 to $4 billion in 2002. When American Growers' expenses and losses dropped the company's surplus below statutory minimums, NDOI declared the company to be in a hazardous financial condition and took control of the company—first placing the company under supervision in November 2002 and then in rehabilitation in December 2002. Appendix V provides additional details on the decline in American Growers' surplus. At the time of American Growers' failure, RMA's financial oversight processes were inadequate to identify the full extent of financial weaknesses of insurance companies participating in the federal crop insurance program. In practice, RMA's oversight procedures focused primarily on whether a company had sufficient surplus to pay claims based on its past performance, rather than on the overall financial health and outlook of the company. In addition, RMA did not generally share information or coordinate with state regulators on the financial condition of companies participating in the federal crop insurance program. Although RMA reviewed companies' operational plans and selected financial data, such as annual financial statements, in the case of American Growers, RMA was unaware that the company was projecting underwriting gains in excess of historic averages to pay for its operating expenses. The company's failure to achieve these gains resulted in a substantial reduction in its surplus and its subsequent financial failure. In the case of American Growers, RMA and NDOI did not begin cooperating on overseeing the company until it had been placed into supervision in November 2002. In 2002, when American Growers failed, the data that companies participating in the federal crop insurance program submitted to RMA provided an overall picture of company operations and complied with RMA's regulations. However, the information provided was typically 6 to 18 months old, and, according to an RMA official, the agency's oversight focused primarily on whether a company had the financial resources to pay claims on crop insurance policies and not on the overall financial health of the company. RMA's approach to financial oversight stemmed, in part, from the fact that the companies participating in the program are private and are licensed and regulated by state insurance departments. State insurance departments are responsible for monitoring the overall financial condition of companies chartered and licensed to operate in their state. In addition, some of the companies selling crop insurance are affiliated with holding companies or other related companies, which RMA does not review for financial soundness.
Since American Growers’ failure, RMA has begun requiring federal crop insurance companies to provide additional financial data to help the agency determine if companies are adequately financed to perform their obligations under their SRAs. One of RMA’s primary responsibilities is to ensure the integrity and stability of the crop insurance program, in part, by monitoring insurance companies’ compliance with program criteria such as submitting statutory statements required by state regulators and meeting certain financial ratios, as defined in federal regulations. To ensure that the companies participating in the federal crop insurance program sell and service insurance policies in a sound and prudent manner, the Federal Crop Insurance Act requires crop insurance companies to bear a sufficient share of any potential policy loss. Title 7, Code of Federal Regulations, chapter IV, contains the general regulations applicable to administering the federal crop insurance program. The SRA between RMA and participating crop insurance companies establishes the terms and conditions under which RMA will provide subsidy and reinsurance on crop insurance policies sold or reinsured by insurance companies. These terms and conditions state, in part, that companies must provide RMA with accurate and detailed data, including their (1) annual plan of operation, (2) financial statements filed with the applicable state insurance regulator, and (3) any other information determined necessary for RMA to evaluate the financial condition of the company. When approving a company to participate in the crop insurance program, RMA analyzes it according to 16 financial ratios set forth in RMA regulations. Combined, these 16 ratios are intended to provide RMA a reasonable set of parameters for measuring insurance companies’ financial health, albeit generally from a historical perspective. The 16 financial ratios include such things as (1) change in net writings, (2) 2-year overall operating ratio, (3) change in surplus, and (4) liabilities to liquid assets. Ten of the 16 ratios specifically refer to changes related to companies’ surplus—the uncommitted funds used to cover policy claims. When a company fails more than 4 of the 16 financial ratios, RMA requires the company to submit an explanation for the deviation and its plans to correct the situation. If the explanation appears reasonable, RMA approves the company to sell and service crop insurance for the next crop year. In August 2001, RMA notified American Growers that the company had 6 ratios, based on its December 2000 financial statement, that fell outside acceptable ranges, including its 2-year overall operating ratio, change in surplus, and 2-year change in surplus. Table 1 shows the 6 ratio requirements and American Growers’ ratio for each of the 6 ratios it failed. According to an RMA memorandum dated October 2001, American Growers reported that most of its unacceptable ratios were due primarily to underwriting losses related to its multiperil crop insurance that produced unfavorable results due to drought conditions in 2000, particularly in Nebraska and Iowa, and the impact of the federally subsidized reimbursement not covering the company’s expenses. Additionally, American Growers cited the cost of the class-action lawsuit relating to its CRC Plus product as a contributing factor. 
Finally, American Growers explained that the expansion of its crop operations through the purchase of a competitor’s crop insurance business was expected to provide efficiencies that would reduce expenses and help improve the company’s profitability in the future. Based on American Growers’ explanations, RMA determined that the company’s 2002 SRA should be approved. RMA did not believe that the adverse developments that American Growers had experienced were significant enough to move the company close to insolvency. RMA’s decision was partially based on anticipated improvements in overall performance resulting from American Growers’ acquisition of another company’s assets and the potential for achieving greater economies of scale. Furthermore, while American Growers failed more than 4 of the 16 financial ratios, it was not the only company with such results. Of the 18 companies participating in the federal crop insurance program in 2002, some failed more ratios than American Growers, though most failed fewer. Specifically, of the other 17 companies, 3 companies had 7 or more failed ratios, 1 had 6—the same number as American Growers, and 13 companies had 4 or fewer failed ratios. In March 2002, American Growers had 5 ratios, based on its December 2001 financial statement, that fell outside acceptable ranges, including change in net writings, 2-year overall operating ratio, and liabilities to liquid assets. Table 2 shows the 5 ratio requirements and American Growers’ ratio for each of the 5 ratios it failed. American Growers cited its acquisition of its competitor’s crop insurance business, the adverse development of its CRC Plus settlement, and the delay in its reinsurance payments due from RMA as the primary reasons for failing these ratios. Based on the company’s explanation of why it had failed the 5 ratios, in June 2002—5 months before American Growers’ financial failure—RMA determined that American Growers met the standards for approval to sell and service crop insurance policies for 2003. In 2002, as in 2001, although American Growers failed more than the 4 ratios allowed under the SRA, its performance was not unlike that of some other companies. Of the 19 companies participating in the crop insurance program in 2003, 2 companies had 8 or more failed ratios, 2 had 5—the same number as American Growers, and 14 companies had 4 or fewer failed ratios. Although RMA routinely reviewed the financial documents required under the SRA, we found the agency’s financial oversight procedures inadequate to fully assess American Growers’ financial condition. RMA reviewed the company’s surplus and reinsurance arrangements and approved the company to write policies for the 2003 crop year, based on this analysis. However, RMA was unaware that American Growers was projecting profits in excess of historic averages to pay for its operating expenses and that its failure to achieve these profits would mean that the company’s surplus would be inadequate to absorb resulting operating losses and could result in the financial failure of the company. One reason RMA was unable to identify deficiencies in American Growers’ finances was that, consistent with the agency’s emphasis on companies’ compliance with program criteria, RMA reviewed only a company’s historical financial information and its ability to pay claims on the basis of the company’s past surplus and its private reinsurance agreements. 
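A minimal sketch of the screening rule described above—a company failing more than 4 of the 16 ratios must explain the deviations and its corrective plans—might look like the following; the ratio names, acceptable ranges, and company values are illustrative stand-ins, not RMA’s actual regulatory thresholds or American Growers’ data.

```python
# Minimal sketch of the ratio-based screening described above: a company that fails
# more than 4 of the 16 ratios must explain the deviations and its corrective plans.
# The ratio names, acceptable ranges, and company values are illustrative stand-ins.

ACCEPTABLE_RANGES = {
    "change_in_net_writings":       (-0.33, 0.33),
    "two_year_operating_ratio":     (None, 1.00),   # upper bound only
    "change_in_surplus":            (-0.10, 0.50),
    "two_year_change_in_surplus":   (-0.10, 0.50),
    "liabilities_to_liquid_assets": (None, 1.05),
    "gross_premium_to_surplus":     (None, 9.00),
}

def failed_ratios(company_ratios):
    """Return the names of ratios that fall outside their acceptable ranges."""
    failures = []
    for name, value in company_ratios.items():
        low, high = ACCEPTABLE_RANGES[name]
        if (low is not None and value < low) or (high is not None and value > high):
            failures.append(name)
    return failures

company = {   # hypothetical values for a struggling company
    "change_in_net_writings":       0.45,
    "two_year_operating_ratio":     1.08,
    "change_in_surplus":           -0.22,
    "two_year_change_in_surplus":  -0.35,
    "liabilities_to_liquid_assets": 1.10,
    "gross_premium_to_surplus":     7.50,
}

failures = failed_ratios(company)
if len(failures) > 4:
    print(f"{len(failures)} ratios failed; explanation and corrective plan required:")
    print(", ".join(failures))
else:
    print(f"{len(failures)} ratio(s) failed; no explanation required under the screening rule")
```

Because the inputs to such a screen are prior-year statements, it is inherently backward looking.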
For example, RMA’s decision to approve companies to participate in the federal crop insurance program for 2002 (July 2001 – June 2002) was based on the company’s financial information as of December 31, 2000. Further, while RMA required companies to submit an operation plan showing projected policy sales, RMA did not require a company to provide operating budget projections for the upcoming year. As a result, RMA’s approval decisions were generally based on a company’s past financial performance rather than a forward-looking perspective of a company’s financial health. Without knowing the details of a company’s projected operating budget including its acquisition plans and the financial conditions of affiliated, parent, or subsidiary companies, RMA did not have a complete picture of the company’s financial condition. Thus, RMA was unable to adequately identify or take action to lessen any risks that may have been developing in companies with deteriorating profits, as was the case in American Growers. We believe that this lack of information impaired RMA’s decision-making process; therefore, the agency was forced to make decisions based on incomplete, narrowly focused, and dated information. Subsequent to the financial failure of American Growers, RMA took several steps to improve its oversight and analysis of the financial condition of companies currently participating in the federal crop insurance program. For example, in 2003, RMA started requesting more comprehensive budget and cash flow information from participating companies, which provides the agency a more forward-looking perspective of the companies’ financial health. Specifically, RMA will require insurance companies to provide their estimated underwriting gains or losses for the coming year; copies of all risk-based capital reports; and a signed statement identifying any potential threats to the company’s ability to meet its obligations for current and future reinsurance years, along with the possible financial ramification of such obligations. In addition, RMA is revising the SRA in its efforts to address some of the shortcomings of the current SRA. Although RMA officials said the agency plans to continue requesting more comprehensive information from crop insurance companies and had developed a financial analysis plan, as we concluded our review, the agency did not have formal written policies and procedures in place incorporating these changes. In a November 2003 memorandum to RMA’s administrator, USDA’s Office of Inspector General provided general comments and suggestions for RMA’s consideration in its renegotiation of the current SRA. Some of the suggestions to improve the SRA included requiring companies to provide (1) “revenue and expense forecast budget data for the forthcoming year as a part of the plan of operations approval process, including agents’ commission rates and salary and other compensation for top company officials,” (2) “information relating to any planned acquisition of other crop insurance companies,” and (3) “the financial roles that will be played by parent/subsidiary companies in the crop insurance operations.” RMA did not routinely coordinate with state regulators regarding the financial condition of companies participating in the federal crop insurance program. RMA’s contact with state regulators was ad hoc and primarily limited to episodes during the introduction of new crop products or company acquisitions. 
RMA did not discuss the financial status of companies with regulators; in any case, it could not have done so because it lacked an agreement with state insurance regulators regarding the sharing of confidential financial and examination records. Companies selling and servicing crop insurance under the federal crop insurance program are subject to the regulations of the state where the company is chartered as well as federal regulations. According to NAIC, a state regulator’s primary responsibilities are to protect the public interest; promote competitive markets; facilitate the fair and equitable treatment of insurance consumers; promote the reliability, solvency, and financial solidity of insurance institutions; and enforce state regulation of insurance. State regulators, among other things, require companies to file periodic information regarding their financial condition, including the adequacy of their surplus to cover claim losses, and the solvency of the company. Prior to the failure of American Growers, RMA did not routinely coordinate with state regulators regarding companies’ financial condition. Also, RMA did not have a written policy or information-sharing agreements that would allow state insurance regulators to share sensitive financial information about crop insurance companies with the agency. According to several state regulators, RMA did not routinely share information or otherwise coordinate with state regulators to determine the financial health of a company. According to another state regulator, RMA and the state had talked when a company was introducing a new crop insurance product; however, the regulator could not remember sharing information with RMA about the financial operations of companies participating in the federal crop insurance program. Furthermore, the state regulators with whom we spoke said that any policy promoting coordination would be of limited value unless the states and RMA established a written agreement allowing the state regulators to share confidential business information with RMA. RMA’s lack of an agreement for sharing information with NDOI prevented the state from disclosing sensitive business information on American Growers. NDOI officials identified financial and management weaknesses directly or indirectly affecting American Growers during its periodic reviews as early as 2000. Beginning in 2001, and continuing through August 2002, NDOI was internally discussing the possibility of conducting a targeted examination of Acceptance, including its subsidiary—American Growers. However, in September 2002, due to other priorities and resource constraints, NDOI decided to postpone an on-site examination of the company until 2003. RMA called the state insurance regulator in May 2002, and again in September 2002, asking whether there were any special inquiries or actions pending by the state regarding American Growers and whether American Growers was listed on the state’s list of companies at risk. NDOI acknowledged to RMA that it had asked American Growers to provide additional information regarding its first quarterly submission for 2002; however, NDOI explained that this was not unusual because a number of other companies also had outstanding inquiries. NDOI explained that most of its information is considered public and could be furnished to RMA if requested. However, NDOI’s work products, including its list of companies most at risk, company examination reports, and associated work papers, were considered confidential. 
As a result, NDOI required that a confidentiality agreement be signed before they could share the information. On September 20, 2002, NDOI began drafting a confidentiality agreement so it could share information about American Growers with RMA. However, this agreement was not completed before American Growers’ failure. Since the failure of American Growers, RMA has begun working with NAIC on draft language for confidentiality agreements that would allow state regulatory agencies to share confidential business information with RMA. However, at the conclusion of our review, no written confidentiality agreements had been formalized. RMA worked with NDOI to effectively manage the failure of American Growers by ensuring that policyholder claims were paid and crop insurance coverage was not disrupted. However, servicing the company’s crop insurance policies cost RMA more than $40 million for such things as paying agent commissions and staff salaries. Further, RMA lacked a written policy that clearly defined its relationship to state actions in handling company insolvencies. While NDOI accommodated RMA’s interests by not immediately liquidating American Growers’ assets so that policyholders could be served, without a written agreement in place, other actions such as liquidation could have limited RMA’s flexibility to protect policyholders and maintain stability in the federal crop insurance program. RMA effectively protected American Growers’ policyholders after the company’s failure by ensuring that farmers’ claims were paid and that their crop insurance coverage was not disrupted. After NDOI obtained an order of supervision, NDOI and RMA signed a memorandum of understanding that specified that American Growers, under NDOI appointed management, would pay claims and service policies with American Growers’ funds. RMA signed an amendment to American Growers’ 1998 SRA and agreed to reimburse the company for continued expenses associated with paying or servicing crop insurance claims when American Growers’ available cash accounts—about $35 million—dropped to $10 million or below. RMA began day-to-day oversight of American Growers in conjunction with NDOI at the company’s Council Bluffs, Iowa, offices on January 6, 2003. The purpose of the oversight was, among other things, to ensure the timely payment of claims, the timely collection of premiums, the efficient transfer of 2003 business to other insurance companies, and the review and approval of the company’s employee retention plan and payments to creditors. RMA worked with NDOI to keep American Growers in rehabilitation rather than liquidate the company because RMA was concerned that if NDOI chose to liquidate the company RMA may not have a mechanism to expeditiously pay claims and transfer American Growers’ policies to other insurance providers. Continuity of coverage is critical to policyholders because they must provide proof of insurance coverage in order to secure loans and obtain credit to plant the next year’s crops. Policyholders may become ineligible for crop insurance for 1 year if their coverage is terminated. RMA was concerned that if American Growers was liquidated, policyholders would not be paid for their losses and their coverage would lapse, making them ineligible for continued crop insurance coverage. While the SRA provides that RMA could take control of American Growers’ crop insurance policies, it did not have an effective way to service these policies. 
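The funding arrangement described above amounted to a simple threshold rule: American Growers’ own cash, about $35 million at the outset, covered servicing costs until it fell to $10 million, after which RMA reimbursed continued expenses. The sketch below restates that rule; the monthly outlays are invented for illustration.

```python
# Sketch of the threshold rule described above: the company's own cash covers servicing
# costs until it falls to $10 million, after which RMA reimburses continued expenses.
# The monthly outlays below are invented for illustration.

STARTING_CASH = 35_000_000
FLOOR = 10_000_000

def split_funding(monthly_costs, cash=STARTING_CASH, floor=FLOOR):
    """Return (company_funded, rma_funded) totals under the threshold rule."""
    company_funded = rma_funded = 0
    for cost in monthly_costs:
        available = max(cash - floor, 0)   # spendable before hitting the floor
        from_company = min(cost, available)
        company_funded += from_company
        rma_funded += cost - from_company
        cash -= from_company
    return company_funded, rma_funded

monthly_costs = [12_000_000, 10_000_000, 8_000_000, 6_000_000]  # hypothetical
company, rma = split_funding(monthly_costs)
print(f"Paid from company cash: ${company / 1e6:.1f} million")
print(f"Reimbursed by RMA:      ${rma / 1e6:.1f} million")
```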
On December 18, 2002, RMA issued procedures for transferring existing policies written under American Growers to other insurance providers approved under the federal crop insurance program. Under these procedures, American Growers was to notify its agents that all of its policies must be placed with another insurance provider. The agents had the primary responsibility to transfer the policies. By April 2003, RMA transferred or assigned a total of 349,185 policies—all of which were eligible—to other companies in the federal crop insurance program, reflecting about $576.4 million in premiums. Any American Growers’ policy that was not transferred voluntarily to a new insurance provider was assigned by RMA on a random basis to a provider that was currently writing insurance in the applicable state. Less than 8 percent of the policies had to be assigned to other insurance providers because the policyholder or agent had not acted on them, or because paperwork errors interfered with their transfer. For the fall and spring crop seasons combined, agents or policyholders transferred about 323,000 policies, and RMA assigned about 26,500 policies. RMA worked in conjunction with NDOI and remaining American Growers’ staff to ensure that 52,681 claims totaling about $410 million were paid. About $400 million of these claims were paid by March 2003. The claims that were filed resulted from policyholder losses from the 1999 through 2003 crop seasons—primarily the 2002 crop season. A month-by-month presentation of this information is presented in appendix VI. The cost of servicing American Growers’ crop insurance policies, which included the administrative and operating costs of paying claims and transferring policies, totaled about $40.5 million as of March 2004 (see table 3). These costs included agent commissions, office space leases and rental equipment, payroll for remaining American Growers’ staff, severance pay, and other expenses. Six former American Growers’ employees remained on-site to respond to information requests associated with paid claims and transferred policies, to process remaining claims, and to produce end-of-year financial statements. RMA would like to recoup some of these costs by (1) obtaining revenues that could be derived from the liquidation of American Growers’ assets by NDOI, if that should occur, and (2) requesting that NDOI provide RMA with any portion of the company’s cash reserves—totaling about $7 million as of February 2004—that may remain before the company is liquidated. However, according to NDOI, RMA’s standing as a creditor in the case of liquidation is unclear, and RMA does not know to what extent, if any, it can recoup its costs from these financial sources. At the time of American Growers’ failure, RMA did not have a written policy defining its financial roles and responsibilities in relation to state actions in the event of an insurance provider insolvency. While the SRA provides that RMA may take control of the policies of an insolvent insurance company to maintain service to policyholders and ensure the integrity of the federal crop insurance program, state regulators’ decisions may constrain RMA’s ability to efficiently protect policyholders. In the case of American Growers, an RMA official reported that NDOI made it clear that it had no choice, given the weakened financial condition of the company, but to liquidate American Growers unless RMA funded the company until all the policies had been serviced. 
If the state had liquidated the company, it would have sold all the company’s property and assets, creditors might have initiated legal actions over the existing assets (including premiums owed by policyholders), and payment of claims might have been frozen. Furthermore, liquidation would have left RMA with a number of crop policies to service, with no way of servicing them. RMA decided that the best course of action was to reach an agreement with NDOI to stave off liquidation by reimbursing NDOI for all costs associated with the servicing of policies until all 2002 policies had been serviced and until all producers had found new insurance providers for the 2003 crop year. Fortunately, NDOI accommodated RMA’s interests by allowing RMA to fund the operation of the company long enough to pay farmers’ claims and transfer policies. However, other actions available to the state could have increased RMA’s costs or limited RMA’s flexibility in protecting policyholders. When an insurance provider becomes insolvent, the SRA provides that RMA will gain control of its federally funded crop insurance policies and any premiums associated with those policies. However, as the case of American Growers demonstrates, RMA is not prepared to assume such responsibility. RMA was concerned, among other things, that it lacked sufficient staff and other capabilities, such as data management systems, to effectively service policyholders. RMA could have employed a contractor to service policyholders, but doing so could have been costly and may not have resulted in the timely payment of claims. Furthermore, according to RMA, it was unable to identify a company to contract with to service the policies and related claims. Thus, according to RMA, while it has the authority in the event of insolvency to service policyholders by taking control of companies’ policies, it is unprepared to act on this authority. RMA is further dependent on state regulators to make decisions that will allow the agency to act in the most efficient manner to protect policyholders and maintain stability in the federal crop insurance program. Prior to American Growers’ insolvency, RMA had not reached an agreement with NDOI that addressed RMA’s interests in the case of insolvency, including the state’s financial responsibilities. RMA argues that while it does not have a written policy to address insolvencies, it does have flexibility to assess the situation when it occurs and use the most efficient way to ensure that policyholders do not face a service disruption. While the lack of a written policy and agreements may allow greater flexibility, the absence of a specific framework may also result in state regulator decisions detrimental to RMA and the federal crop insurance program. A policy describing state and RMA authorities and responsibilities when a state decides to act against an insolvent company would provide RMA some assurance that the federal government’s interests are protected. The failure of American Growers, at the time the largest participant in the federal crop insurance program, was caused by the cumulative effect of company decisions over several years and was triggered by a drought that forced the company to severely deplete its surplus to cover operating expenses. 
Reviewing the causes underlying American Growers’ failure and RMA’s actions provides a valuable opportunity to identify shortcomings in the financial oversight of companies participating in the federal crop insurance program and reforms necessary to strengthen RMA’s oversight and RMA’s ability to respond to an insurance provider insolvency. The failure of American Growers demonstrates that companies relying on anticipated underwriting gains to cover operational expenses may face financial difficulties similar to those of American Growers. More specifically, it suggests that companies must find ways to achieve operating efficiencies so that their expenses do not exceed the administrative and operational expense reimbursement provided by RMA to cover expenses for the sale and service of federal crop insurance policies. Further, the failure of American Growers highlights the need to improve RMA’s financial oversight of companies participating in the federal crop insurance program. Clearly, RMA’s oversight procedures at the time of the failure were inadequate to ensure that companies met applicable financial requirements for participation in the program. Specifically, the failure of American Growers highlights the need for improved financial and operational reviews, and improved coordination with state insurance regulators. If adequate financial oversight procedures had been in place prior to the failure of American Growers, the company’s weakened financial condition might have been detected in time to allow for corrective actions, thereby reducing costs to taxpayers. While RMA has conducted additional oversight of companies and initiated greater contact with state regulators since the failure of American Growers, it has not formalized these procedures. RMA responded to the failure of American Growers in an effective manner that ensured continued coverage for farmers and stability in the crop insurance program. Further, RMA demonstrated that the federal crop insurance program functioned as intended by ensuring that policyholders were protected. However, the failure of American Growers highlights the need for RMA to consider developing written policies to ensure that it takes the most effective and efficient actions in the event of future insolvencies in the federal crop insurance program. As demonstrated by the failure of American Growers, RMA is vulnerable to state insurance regulators’ actions when a company fails. State regulators are vested with the authority to determine what supervisory action to take in response to the financial failure of an insurance company. While NDOI accommodated RMA’s interests by allowing RMA to fund the operation of the company long enough to pay farmers’ claims, other actions available to the state, including liquidation, could have increased RMA’s costs or limited RMA’s flexibility in protecting policyholders. Better coordination with state regulators regarding respective authorities and responsibilities in the event of future insurance provider insolvencies is necessary to ensure that RMA’s interests are protected. 
To improve RMA’s financial oversight of companies participating in the federal crop insurance program and its ability to effectively address future insolvencies, we recommend that the Secretary of Agriculture direct RMA to take the following three actions: (1) Develop written policies to improve financial and operational reviews used to monitor the financial condition of companies to include analyses of projected expenses, projected underwriting gains, relevant financial operations of holding companies, and financial data on planned acquisitions. (2) Develop written agreements with state insurance regulators to improve coordination and cooperation in overseeing the financial condition of companies selling crop insurance, including the sharing of examination results and supporting work papers. (3) Develop a written policy clarifying RMA's authority as it relates to federal/state actions and responsibilities when a state regulator decides to place a company under supervision or rehabilitation, or to liquidate the company. We provided USDA with a draft of this report for its review and comment. We received written comments from the Administrator of USDA’s RMA. RMA agreed with our recommendations and stated that it is (1) formalizing the improvements in oversight that we recommended in the new SRA, (2) developing written agreements with state insurance regulators and the National Association of Insurance Commissioners (NAIC) to improve data sharing and oversight, and (3) clarifying RMA’s authority as it relates to federal/state actions when a state takes action against a crop insurance company in its draft SRA and in discussions with state regulators and the NAIC. When completed, RMA’s initiatives to implement the recommendations in this report will improve its ability to evaluate companies’ overall financial health and to detect weaknesses in companies’ financial condition earlier. However, to the extent that RMA cannot obtain enhanced disclosure and accountability through proposed changes to the SRA, it should implement our recommendations by modifying its regulations or other written policies. Finally, RMA’s increased cooperation and coordination with state insurance regulators will likely strengthen oversight by both federal and state regulators and facilitate problem resolution should a company fail in the future. RMA also provided technical corrections, which we have incorporated into the report as appropriate. RMA's written comments are presented in appendix IX. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days from its issue date. At that time we will send copies of this report to appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. At the request of the Chairman and Ranking Minority Member of the House Committee on Agriculture and the Chairman and Ranking Minority Member of the House Subcommittee on General Farm Commodities and Risk Management, we reviewed USDA’s actions regarding American Growers Insurance Company (American Growers) and their impact on the federal crop insurance program. 
Specifically, we agreed to determine (1) what key factors led to the failure of American Growers, (2) whether Risk Management Agency (RMA) procedures were adequate for monitoring crop insurance companies’ financial condition, and (3) how effectively and efficiently RMA handled the dissolution of American Growers. In addition, we were asked to determine what factors led to RMA determinations affecting a proposed sale of American Growers’ assets to Rain and Hail LLC (Rain and Hail) and RMA’s decision to guarantee that all American Growers’ agent commissions be paid. Information related to the Rain and Hail proposal is provided in appendix VII. Information on USDA’s decisions to guarantee agent commissions is provided in appendix VIII. To determine the key factors leading to the failure of American Growers, we analyzed company documents and financial statements, including annual and quarterly statements for 1999 through 2002. We compared American Growers’ expense data with expense data for other companies participating in the program. For this analysis, we computed the average expense ratios of companies participating in the crop insurance program, excluding the expense data from American Growers. Due to the timing of its failure, American Growers did not submit an expense report to RMA for 2002. To capture the extent of the financial problems that American Growers experienced in 2002 in comparison with other companies, we worked closely with staff who remained at American Growers while it was in rehabilitation to create an expense report for 2002. We also interviewed American Growers’ management; the Nebraska Department of Insurance (NDOI)-appointed rehabilitator for American Growers and other key staff; industry groups, such as the National Association of Insurance Commissioners (NAIC); and representatives from other crop insurance companies, including key Rain and Hail personnel, to gain an industry perspective on the failure of American Growers and on RMA’s actions. We also contacted the National Association of Crop Insurance Agents; however, it did not grant our requests for an interview. To adjust for the general effects of inflation over time, we used the chain-weighted gross domestic product price index to express dollar amounts in inflation-adjusted 2003 dollars. To evaluate RMA’s oversight procedures, we interviewed RMA staff in its Washington, D.C., and Kansas City, Missouri, offices. We reviewed the guidance that RMA uses to monitor companies’ compliance with the federal crop insurance program, including relevant laws; the Code of Federal Regulations, title 7, part 400; and agency guidance, including RMA’s Crop Insurance Handbook for 2002 and the current Standard Reinsurance Agreement (SRA), to verify that monitoring procedures were met. We also reviewed RMA’s files relating to the oversight of American Growers and approval of its SRA. To determine the effectiveness of RMA’s handling of the dissolution of American Growers, we examined RMA’s decision-making process and the costs associated with running American Growers’ operations after its failure to ensure that federal crop insurance policies were serviced. We reviewed American Growers’ financial statements and other documents. 
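The inflation adjustment described above is a standard deflation step: each nominal amount is scaled by the ratio of the 2003 price index to the index for the year in which the amount was reported. The sketch below shows the arithmetic with made-up index values, not the actual chain-weighted GDP price index series.

```python
# Sketch of the inflation adjustment described above; the index values are made up
# for illustration and are not the actual chain-weighted GDP price index series.

PRICE_INDEX = {1999: 95.4, 2000: 97.5, 2001: 99.7, 2002: 101.2, 2003: 103.2}  # hypothetical

def to_2003_dollars(amount, year, index=PRICE_INDEX):
    """Express a nominal dollar amount from `year` in 2003 dollars."""
    return amount * index[2003] / index[year]

# Example: a $100 million expense reported in 2000, restated in 2003 dollars.
print(f"${to_2003_dollars(100_000_000, 2000) / 1e6:.1f} million (2003 dollars)")
```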
We used semistructured interviews to obtain the views of the Nebraska state commissioner; American Growers’ management; representatives from other crop insurance companies, including key Rain and Hail personnel; RMA staff; NAIC officials; and, industry groups on the failure of American Growers and on issues related to RMA’s handling of the dissolution. Specifically, we obtained our information from the officials by asking 10 structured questions in a uniform order within an interview that included additional unstructured, probing follow-up questions that were interjected at the discretion of the interviewer. We also used structured interviews to obtain the views of insurance commissioners on the failure of American Growers and on issues related to sharing confidential business information with RMA. In this case, we asked an additional three structured questions and followed up with additional unstructured questions as needed. We selected insurance commissioners in 10 states where there was at least one 2004 SRA holder, according to RMA data. These states were Connecticut, Indiana, Illinois, Iowa, Kansas, Minnesota, New York, Ohio, Pennsylvania, and Texas. We met with RMA officials in February 2004 to discuss our findings and tentative recommendations. We conducted our review from July 2003 through May 2004 in accordance with generally accepted government auditing standards. As part of an overall strategy to increase the company’s market share of the crop insurance industry, in 1997, American Growers developed and marketed a crop insurance product—Crop Revenue Coverage Plus (CRC Plus)—that was a supplement to federally reinsured crop insurance, but it was not subsidized or reinsured by the federal government. The product was a supplement to Crop Revenue Coverage (CRC), an insurance product that protected farmers against crop loss and low crop prices in the event of a low price, a low yield, or any combination of the two. CRC Plus allowed farmers to obtain supplemental coverage for their crops, in essence providing a higher level of coverage in the event of losses. American Growers initially marketed CRC Plus in only two states and covered grain, corn, sorghum, and soybean crops. In 1999, when the company extended CRC Plus to rice, a crop with which American Growers had limited actuarial experience, the company mistakenly priced the product too low. It then promoted the product heavily and did not adequately anticipate the demand for the product. When it priced CRC Plus for rice, American Growers made a mathematical error—caused by the misplacement of a decimal point—that resulted in the insurance being sold for a lower price than it should have been. The low price for the policy, coupled with uncertainty in the market price of rice that year, resulted in a greater demand for the product than the company had anticipated. When American Growers realized that the demand for the product and associated losses would be greater than the company’s surplus could handle, especially considering its low price, American Growers announced it would no longer accept applications at the price originally listed, effectively withdrawing the product from the market. However, farmers had already made decisions about what crop insurance they would purchase, based upon their belief that they could obtain the new product offered by American Growers. The withdrawal of the product was untimely and made it difficult for some farmers to find adequate insurance. 
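The decimal-point error described above can be illustrated with hypothetical numbers; the rates and coverage amount below are invented for illustration and are not the actual CRC Plus figures.

```python
# Hypothetical illustration of how a misplaced decimal point underprices a policy.
# The rates and coverage amount are invented; they are not the actual CRC Plus figures.
intended_rate = 0.065    # intended premium rate: 6.5 cents per dollar of coverage
mispriced_rate = 0.0065  # rate actually charged after the decimal-point error

coverage = 100_000  # hypothetical coverage per policy, in dollars
print(f"Intended premium: ${coverage * intended_rate:,.0f}")   # $6,500
print(f"Charged premium:  ${coverage * mispriced_rate:,.0f}")  # $650
print(f"Premium collected is only {mispriced_rate / intended_rate:.0%} of what was needed")
```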
As a result, Congress acted to extend the filing deadline for other types of federally reinsured crop insurance so that farmers adversely affected by American Growers’ actions could obtain adequate insurance for their crops. Finally, some farmers sued American Growers, while RMA and six states examined American Growers’ actions. The litigation by farmers and regulatory actions resulted in more than $13 million in fines and settlements levied against American Growers in addition to losses of $6 million. The fines, costs from litigation, and increased service costs resulting from the new insurance product reduced American Growers’ surplus. As a result, American Growers’ surplus dropped from $76 million in 1998 to $60 million in 2000, a 21 percent decline over 2 years. This decline in American Growers’ surplus occurred at the same time the company increased the amount of insurance premium it wrote, from $271 million in 1998 to $307 million in 2000, an increase of 13 percent. To lessen the impact of losses associated with the CRC Plus policies, American Growers accepted a $20 million loan in the form of a surplus note from an affiliate company to strengthen its surplus. American Growers also acquired commercial reinsurance coverage to pay for losses related to CRC Plus. This reinsurance coverage committed the company to future payments of more than $60 million through 2006. American Growers’ reported operating expenses were higher than the average reported expenses of other companies participating in the federal crop insurance program, primarily due to its efforts to attract agents by paying them higher-than-average commissions and other actions designed to expand its business. From 2000 to 2002, average commissions for American Growers’ agents were 12 percent higher than commissions for agents working for other companies. American Growers paid commissions that averaged about $17 for each $100 of premium it sold, while other companies’ agent commissions averaged $15 for each $100 of premium. Agents are companies’ principal representatives to farmers. Farmers purchase crop insurance through agents who can write premium for any company selling crop insurance. Farmers generally develop relationships with specific agents and rely on agents for advice and service. Successful agents write more policies and may write policies with lower loss ratios. Agents typically receive as a commission a percentage of every dollar of premium in crop insurance sold to farmers. Some agents choose to write policies for certain companies based on the commissions paid to them by the company and on how well the company services the agents’ clients. Higher commission rates are not the only factor attracting an agent to a company, but rates do play an important role. In an effort to increase its market share by recruiting more agents to sell crop insurance, American Growers paid higher agent commissions than other companies participating in the program. American Growers also funded some expenses not directly related to the sale and service of federally funded crop insurance, such as trips to resort locations. These expenses, among others, created operating costs that were greater than the average operating expenses of other companies in the industry. Overall, American Growers’ expenses, as a percentage of premium sold, were about 11 percent higher than the average expenses of the other companies. 
In other words, American Growers had expenses of about $30 for every $100 of premium it sold, while other companies had expenses of about $27 for every $100 of premium sold. Salaries at American Growers averaged 15 percent higher than at other companies. In addition, American Growers spent, as a percentage of premium, twice as much as other companies on advertising, and its expenses for equipment, including computer equipment, were twice those of other companies. In addition to the fact that American Growers’ expenses, as a percentage of premium sold, were higher than those of other companies, American Growers’ expenses were also higher than the amount of RMA’s reimbursement to the company. RMA provides companies a reimbursement to cover their expenses related to the sale and service of crop insurance. This reimbursement is a preestablished percentage of premiums paid to compensate companies for the expenses associated with selling and servicing federal crop insurance. The reimbursement rate is set at a level to cover the companies’ costs to sell and service crop insurance policies. These costs include agent commissions, staff and office expenses required to process policies and claims, and loss adjusting expenses. In 1998, Congress reduced the amount of reimbursement from a cap of 27 cents per dollar of premium a company sells to 24.5 cents per dollar of premium. This reduction occurred after our 1997 report revealed that companies were basing their request for higher reimbursement rates on numerous expenses that were not directly related to the sale and service of crop insurance, such as trips to resorts, noncompete clauses associated with company mergers, and company profit-sharing arrangements. Under the current reimbursement arrangement, companies have no obligation to spend their payment on expenses related to crop insurance; they may spend the payment in any way they choose. We found that American Growers spent more than its reimbursement by paying above-average rates for agent commissions, marketing efforts, and other items not directly related to the sale and service of federal crop policies, such as tickets to sporting events and trips to resorts for agents. On June 6, 2001, Acceptance Insurance Companies Inc. (Acceptance) and its subsidiaries, including American Growers, acquired the crop insurance business of IGF Insurance Company (IGF) from Symons International Group, Inc. Acceptance and its subsidiaries raised funds for this purchase by selling most of their noncrop insurance subsidiaries between September 1999 and July 2001, as part of a larger business strategy to focus on and expand American Growers’ crop insurance business. American Growers, through its parent corporation Acceptance, acquired most of IGF’s book of crop insurance policies, in addition to obtaining leased office space, company cars, and related staff to service these policies. A senior manager at American Growers said that the company’s strategy was to achieve operational efficiencies by combining the operations of the two companies. However, he said that this goal was not achieved as quickly as the company had planned. For example, American Growers had planned on combining the companies’ two computer systems, but it was unable to do so successfully, requiring it to keep two staffs of information technology specialists. After the acquisition, American Growers grew from having the third largest volume of premium sold to having the largest. However, this growth also came with higher costs. 
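The expense figures above, together with the commission figures and RMA’s reimbursement cap cited in this appendix, can be restated with simple arithmetic; the per-$100 values below are the rounded figures from the text, so the computed percentages differ slightly from the averages cited, which reflect unrounded data.

```python
# Restating the per-$100-of-premium figures cited in this appendix (rounded values).
growers_commission = 17.0    # American Growers' agent commissions per $100 of premium
industry_commission = 15.0   # other companies' average commissions per $100 of premium
growers_expenses = 30.0      # American Growers' total expenses per $100 of premium
industry_expenses = 27.0     # other companies' average expenses per $100 of premium
reimbursement_cap = 24.5     # RMA reimbursement cap per $100 of premium after 1998

print(f"Commissions above industry average: {growers_commission / industry_commission - 1:.0%}")
# -> ~13 percent from the rounded figures; the text's roughly 12 percent reflects unrounded data.
print(f"Expenses above industry average:    {growers_expenses / industry_expenses - 1:.0%}")  # ~11 percent
print(f"Shortfall per $100 of premium vs. the reimbursement cap: "
      f"${growers_expenses - reimbursement_cap:.2f}")  # $5.50 to be covered by underwriting gains
```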
American Growers’ expenses increased 63 percent from 2000 to 2001, the years before and after the purchase of IGF. In 2000, American Growers had about $117 million in expenses, but its expenses increased to $191 million in 2001. While the amount of premium American Growers wrote increased from about $291 million in 2000 to $450 million in 2001, a 54 percent increase, the company’s surplus increased only from $57 million in 2000 to $75 million in 2001, a 31 percent increase. In 2002, American Growers wrote nearly $632 million in premiums, but without adding to its $75 million surplus. American Growers’ high expenses led it to spend more than RMA was reimbursing it for the sale and service of crop insurance. In 2001, for every $100 RMA provided American Growers to sell and service crop insurance, the company was spending $130. To pay for its expenses in excess of RMA’s reimbursement, American Growers planned on making underwriting profits from the sale of crop insurance. When setting its budget for 2002, American Growers predicted it would receive an 18 percent underwriting gain from policies it serviced under the federally reinsured crop program. However, American Growers’ 10-year average underwriting gain in the program was only 16 percent. American Growers based its 2002 budget on achieving over $86 million in underwriting gains that year. The company’s profit projections were based, in part, on retaining a higher percentage of the risk for the policies it sold than in past years. By retaining a higher percentage of the risk on policies, American Growers could increase its profits if claims were low. Conversely, the company increased its exposure to loss if claims were high. However, widespread drought undermined the company’s ability to achieve these gains. In June 2002, more than one-third of the contiguous U.S. was in severe to extreme drought. Total losses for the crop insurance program increased 33 percent from 2001. In 2001, total losses to the program were over $3 billion. In 2002, total losses increased to over $4 billion. For the category of policies for which American Growers retained a higher level of risk, the loss ratio in 2002 was about 40 percent higher than in 2001, resulting in the payment of $114 in claims for every $100 it received in premiums for those policies. When the underwriting gains American Growers had predicted did not materialize, losses and expenses depleted the company’s surplus. As a result, NDOI, which regulates insurance companies domiciled in Nebraska, declared that the company was operating in a hazardous financial condition and placed the company in supervision and, later, rehabilitation. On November 22, 2002, NDOI took steps to protect American Growers’ policyholders by issuing a state order of supervision. NDOI ordered the supervision because the company’s surplus declined from about $75 million for the year ending December 31, 2001, to about $11 million as of September 2002. According to the order, the decline in American Growers’ surplus—in excess of 50 percent within a 9-month period—rendered the company financially hazardous to the public and its policyholders. Under the order of supervision, American Growers could not sell any new insurance policies or conduct transactions beyond those routine to the day-to-day operations of its business without the approval of the supervisor appointed by NDOI. 
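The 2002 outcome on the business American Growers retained can be sketched from the figures cited above; the retained-premium amount below is a hypothetical round number, and the 2001 loss ratio is inferred from the "about 40 percent higher" comparison rather than reported directly.

```python
# Sketch of the 2002 result on retained business, using figures cited above.
# The retained premium is a hypothetical round number; the 2001 loss ratio is
# inferred from the "about 40 percent higher" comparison, not reported directly.

loss_ratio_2002 = 1.14                    # $114 of claims per $100 of premium
loss_ratio_2001 = loss_ratio_2002 / 1.40  # roughly 40 percent lower than 2002
print(f"Implied 2001 loss ratio: about {loss_ratio_2001:.2f}")   # ~0.81

retained_premium = 100_000_000  # hypothetical
result_2002 = retained_premium * (1 - loss_ratio_2002)
print(f"2002 underwriting result on retained business: {result_2002 / 1e6:+.1f} million")  # about -$14M

# Program-wide context cited above: losses rose from over $3 billion to over $4 billion.
print(f"Program-wide loss increase, 2001 to 2002: {4e9 / 3e9 - 1:.0%}")  # ~33 percent
```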
On December 20, 2002, NDOI obtained a court order that placed American Growers into rehabilitation under the auspices of NDOI. Under rehabilitation, NDOI appointed a rehabilitator who took control of American Growers to oversee the orderly termination of the company’s business and to allow for an orderly transfer of policies to other companies. The NDOI-appointed rehabilitator assumed the responsibilities of the board of directors and officers and took control of the day-to-day management of the company. RMA worked in conjunction with NDOI and remaining American Growers’ staff to ensure that claims were paid (see table 4). The claims that were filed resulted from policyholder losses from the 1999 through 2003 crop seasons—primarily the 2002 crop season. After NDOI took control of American Growers, the company had about $35 million in cash. These funds were used, in part, to pay American Growers’ staff and support staff operating under the auspices of NDOI to pay policyholder claims. When American Growers’ cash reserves were reduced to $10 million, RMA reimbursed NDOI for additional costs of $40.5 million to operate the company. When RMA began reimbursing NDOI in February 2003, the vast majority of policyholder claims had been paid (see Fig. 1). About $317 million, or 77 percent, of the approximately $410 million in claims were paid by the end of January 2003. According to an RMA official, while the costs of reimbursing American Growers’ operations may appear excessive, relative to the amount of claims paid, the claims that had been paid before February 2003, were those that could be expeditiously handled. The claims that remained to be paid—beginning in February 2003—were those that required follow-up to determine the accuracy of reported information, were difficult to process due to missing information, or had other problems. Additionally, although claims had been paid and policies transferred, staff were still needed to process the transfer of policy-related paperwork to other companies and resolve lingering issues, such as claims with missing information. Prior to NDOI’s declaration of its hazardous financial condition, American Growers was working to strengthen its financial condition by selling its insurance business to another insurance provider. In September 2002, as losses associated with that year’s extensive drought began to materialize, American Growers realized that the company’s operating expenses and crop losses were outpacing its income and surplus and advised NDOI and RMA accordingly. To improve its financial condition, American Growers attempted to sell its crop insurance business to another insurance company. On November 18, 2002, American Growers’ parent company, Acceptance, signed a nonbinding letter of intent setting forth preliminary terms for the company to sell portions of its crop insurance business to Rain and Hail LLC (Rain and Hail) for over $20 million pending regulatory approval. Rain and Hail asked RMA for authority to transfer American Growers’ policies without having to cancel each policy and rewrite them under its own name—a concession that would have facilitated the bulk transfer of the policies. In the past, RMA had allowed this type of transfer only if the acquiring company agreed to (1) accept all the policies previously underwritten by the company being purchased and (2) assume all past liability for those policies. 
According to RMA, Rain and Hail did not want to assume any past liabilities for the policies and wanted to retain the right to select agents and policyholders with whom it wished to contract. According to RMA, Rain and Hail’s intention was to not accept past liabilities regarding disputed claims, compliance issues, litigation or regulatory issues associated with American Growers’ policies and ultimately to acquire only about one-third of American Growers’ business. In a letter dated November 25, 2002, RMA rejected Rain and Hail’s request for exemptions from RMA rules regarding the bulk transfer of policies. The agency was concerned that waiving the existing rules regarding potential liabilities and future policy placement would not protect the interests of policyholders and taxpayers or the integrity of the federal crop insurance program. RMA was concerned that if it approved the sale of American Growers’ policies to Rain and Hail, it could have left a significant number of policyholders without insurance. It also may have left a disproportionate number of poor performing policies for other insurance providers to assume. Since reinsured companies are required to accept all policyholders that apply for insurance regardless of their loss history, RMA was concerned that its decision would be unfair to other insurance providers and that any future denial of similar exemptions to other companies would be challenged as arbitrary and capricious. As a result, RMA informed Rain and Hail that it could not grant the exemptions it requested. Accordingly, Rain and Hail announced that it was withdrawing its offer to purchase American Growers’ business. When we discussed this issue with Rain and Hail, it concurred that its company was unwilling to accept the past liabilities associated with American Growers’ policies, but denied it was not willing to accept all of American Growers’ policyholders. Senior managers at Rain and Hail said their company was unwilling to accept the past liabilities associated with American Growers’ policies because they did not have adequate time to assess the extent of any such liabilities and the financial implications for Rain and Hail. However, these managers said that Rain and Hail was willing to accept any farmer who wanted a policy from the company, but they stated that the company wanted to retain the right to select which agents it would use to sell and service crop insurance policies. Whether the sale of American Growers’ policies to Rain and Hail could have saved taxpayers all or some of the costs of the dissolution if the proposed sale had been completed is unclear. A Rain and Hail representative stated that the sale would have provided a cash infusion that could have prevented the failure of American Growers. An Acceptance representative stated that the sale might have allowed American Growers to pay remaining claims without having to come under control of NDOI. However, depending on the details, even with the cash infusion from the sale of assets to Rain and Hail, the company may still have been found to be in a financially hazardous condition. After consultation with NDOI, RMA agreed to pay American Growers’ agent commissions in full, despite the fact that they were paid higher than industry averages. RMA believed several factors, any one of which could have resulted in the disruption of policyholders’ coverage, warranted paying agent commissions in full. 
First, RMA agreed to pay agent commissions in full, in part, because NDOI’s position was that as long as American Growers was under the rehabilitation order instead of in liquidation, the company’s contracts were valid, enforceable legal obligations that had to be paid. Second, RMA was concerned that some agents may have refused to continue to service policyholders if they knew they would not get paid for their work, and RMA needed agents’ cooperation in ensuring the timely collection of premiums and transfer of policies to other crop insurance companies. Third, RMA was concerned that some agents, particularly small agents, could go out of business if not paid their commissions and would therefore be unable to service claims or transfer policies. Finally, RMA was concerned that some agents may have deducted their commissions from policyholder premiums, which could have made it more difficult for RMA to determine which policyholders had paid the premiums on their policies. While RMA could have potentially achieved cost savings of about $800,000 by not paying some of American Growers’ agents’ commissions—the portion of their $7.6 million in commissions that exceeded industry averages—agents’ response to such a decision could have also disrupted service to policyholders and caused RMA to incur additional costs. Industry opinion varied on whether RMA should have paid agent commissions in full. According to the former chief executive officer of American Growers, high commissions paid to agents contributed to American Growers’ and other companies’ financial troubles. One company executive expressed concerns that RMA’s actions might make it more difficult for companies that are holding the line on agent commissions to continue to hold commissions at a reasonable level. Another representative was concerned that agents were going to work for the company that paid the highest commissions, regardless of the company’s financial health, because RMA had shown that agents would receive their commission regardless of the company’s status. However, one crop insurance company representative was concerned about the consequences of not paying agent commissions, particularly since the agents were not directly responsible for the company’s failure. Representatives also stated that RMA was correct in paying agent commissions to ensure agent cooperation, to not drive smaller agents into bankruptcy, and to maintain the integrity of the federal crop insurance program. Finally, RMA’s actions in paying full agent commissions could have implications for the future of the federal crop insurance program, but it is unclear how future company and agent practices may be affected by RMA’s decisions. RMA’s actions could suggest that it might provide similar financial support in the event of future insolvencies, regardless of company and agent practices. For example, RMA’s actions could have set a precedent for high agent commissions, a key factor in the failure of American Growers, which could, in turn, be a factor in other insolvencies. However, RMA has stated that it plans to consider each new situation on a case-by- case basis and that agents and companies should not expect the same treatment as in the case of American Growers. RMA said that a managing general agent had recently gone out of business and that RMA had not stepped in to provide relief to agents. The following are GAO’s comments on the Risk Management Agency’s letter dated April 28, 2004. 1. 
Per RMA’s suggestion, we have provided additional details in this report noting that NDOI placed American Growers under supervision on November 22, 2002, and later placed the company under rehabilitation on December 20, 2002. RMA suggests that the state’s initial action impacted its flexibility in working with the state and the company. As we note in our conclusions, better coordination with state regulators regarding respective authorities and responsibilities in the event of future insurance provider insolvencies is necessary to ensure that RMA’s interests are protected. 2. We revised the report to note that some agents are paid a salary rather than receiving commissions on the premiums from policies sold. American Growers’ agents received commissions, as do most agents who sell and service crop insurance. 3. At the time of our review, we noted written procedures based on regulations for the yearly review and approval of SRA holders and applicants. However, as noted in this report, these procedures were insufficient to assess the overall financial health of a company. To the extent that the final SRA does not fully address oversight weaknesses identified in our report, RMA should take action to modify its regulations or other written policies. 4. RMA on-site financial and operational reviews do not appear to focus on the overall financial health of a company, but rather on internal controls. However, as a minimum, RMA should coordinate these reviews with state regulators who periodically review company operations. In addition to the individuals named above, David W. Bennett, John W. Delicath, Tyra DiPalma-Vigil, Jean McSween, and Bruce Skud made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The U.S. Department of Agriculture's (USDA) Risk Management Agency (RMA) administers the federal crop insurance program in partnership with insurance companies that share in the risk of loss or gain. In 2002, American Growers Insurance Company (American Growers), at the time the largest participant in the program, was placed under regulatory control by the state of Nebraska. To ensure that policyholders were protected and that farmers' claims were paid, RMA agreed to fund the dissolution of American Growers. To date, RMA has spent about $40 million. GAO was asked to determine (1) what factors led to the failure of American Growers, (2) whether RMA procedures were adequate to monitor companies' financial condition, and (3) how effectively and efficiently RMA handled the dissolution of American Growers. The failure of American Growers was caused by the cumulative effect of company decisions that reduced the company's surplus, making it vulnerable to collapse when widespread drought in 2002 erased anticipated profits. The company's decisions were part of an overall strategy to increase the scope and size of American Growers' crop insurance business. However, when anticipated profits failed to cover the company's high operating expenses and its surplus dropped below statutory minimums, Nebraska's Department of Insurance (NDOI) declared the company to be in a hazardous financial condition, prompting the state commissioner to take control of the company. In 2002, RMA's oversight was inadequate to evaluate the overall financial condition of companies selling federal crop insurance. Although RMA reviewed companies' plans for selling crop insurance and analyzed selected financial data, its oversight procedures generally focused on financial data 6 to 18 months old and were insufficient to assess the overall financial health of a company. Additionally, RMA did not routinely share information or otherwise coordinate with state regulators on the financial condition of companies participating in the crop insurance program. For example, NDOI had identified financial and management weaknesses at American Growers. Since American Growers' failure, RMA has acted to strengthen its oversight procedures by requiring additional information on companies' planned financial operations. It is also working to improve its coordination with state insurance regulators. However, as we completed our review, neither of these initiatives had been included in written agency policies. When American Growers failed, RMA effectively protected the company's policyholders but lacked a policy to ensure it handled the insolvency efficiently. RMA has spent over $40 million, working with the state of Nebraska, to protect policyholders by ensuring that policies were transferred to other companies and that farmers' claims were paid. NDOI accommodated RMA's interests by allowing RMA to fund the operation of the company long enough to pay farmers' claims. Prior to American Growers' failure, RMA did not have an agreement with the NDOI commissioner defining state and federal financial roles and responsibilities. If the NDOI commissioner had decided to liquidate the company, RMA might have incurred more costs and had less flexibility in protecting policyholders.
Three agencies share responsibility for enforcing ERISA: the Department of Labor's Employee Benefits Security Administration (EBSA), the Department of the Treasury's Internal Revenue Service (IRS), and the Pension Benefit Guaranty Corporation (PBGC). EBSA enforces fiduciary standards for plan fiduciaries of privately sponsored employee benefit plans to ensure that plans are operated in the best interests of plan participants. EBSA also enforces reporting and disclosure requirements covering the type and extent of information provided to the federal government and plan participants, and seeks to ensure that specific transactions prohibited by ERISA are not conducted by plans. Under Title I of ERISA, EBSA conducts investigations of plans and seeks appropriate remedies to correct violations of the law, including litigation when necessary. IRS enforces the Internal Revenue Code (IRC) provisions that pension plans must meet to obtain and keep tax-qualified status, including participation, vesting, and funding requirements. The IRS also audits plans to ensure compliance and can levy tax penalties or revoke the tax-qualified status of a plan as appropriate. PBGC, under Title IV of ERISA, provides insurance for participants and beneficiaries of certain types of tax-qualified pension plans, called defined benefit plans, that terminate with insufficient assets to pay promised benefits. Recent terminations of large, underfunded plans have threatened the long-term solvency of PBGC. As a result, we placed PBGC's single-employer insurance program on our high-risk list of programs needing further attention and congressional action. ERISA and the IRC require plan administrators to file annual reports concerning, among other things, the financial condition and operation of plans. EBSA, IRS, and PBGC jointly developed the Form 5500 so that plan administrators can satisfy this annual reporting requirement. Additionally, ERISA and the IRC provide for the assessment or imposition of penalties for plan sponsors not submitting the required information when due. About one-fifth of Americans' retirement wealth is invested in mutual funds, which are regulated by the Securities and Exchange Commission (SEC), primarily under the Investment Company Act of 1940. The primary mission of the SEC is to protect investors, including pension plan participants investing in securities markets, and maintain the integrity of the securities markets through extensive disclosure, enforcement, and education. In addition, some pension plans use investment managers to oversee plan assets, and these managers may be subject to other securities laws. EBSA's enforcement strategy is a multifaceted approach of targeted plan investigations supplemented by education for plan participants and plan sponsors. EBSA allows its regions the flexibility to tailor their investigations to address the unique issues in their regions, within a framework established by EBSA's Office of Enforcement. The regional offices then have a significant degree of autonomy in developing and carrying out investigations using a mixture of approaches and techniques they deem most appropriate. Participant leads are still the major source of investigations. To supplement their investigations, the regions conduct outreach activities to educate both plan participants and sponsors. The purpose of these efforts is to gain participants' help in identifying potential violations and to educate sponsors in properly managing their plans and avoiding violations. 
The regions also process applications for the Voluntary Fiduciary Correction Program (VFCP), through which plan officials can voluntarily report and correct some violations without penalty. EBSA attempts to maximize the effectiveness of its enforcement efforts to detect and correct ERISA violations by targeting specific cases for review. In doing so, the Office of Enforcement provides assistance to the regional offices in the form of broad program policy guidance, program oversight, and technical support. The regional offices then focus their investigative workloads to address the needs specific to their region. Investigative staff also have some responsibility for selecting cases. The Office of Enforcement identifies national priorities—areas critical to the well-being of employee benefit plan participants and beneficiaries nationwide—in which all regions must target a portion of their investigative efforts. Currently, EBSA's national priorities involve, among other things, investigating defined contribution pension plan and health plan fraud. Officials in the Office of Enforcement said that national priorities are periodically re-evaluated and are changed to reflect trends in the area of pensions and other benefits. On the basis of its national investigative priorities, the Office of Enforcement has established a number of national projects. Currently, there are five national projects pertaining to a variety of issues, including employee contributions to defined contribution plans, employee stock ownership plans (ESOPs), and health plan fraud. EBSA's increasing emphasis on defined contribution pension plans reflects the rapid growth of this segment of the pension plan universe. In fiscal year 2004, EBSA had monetary results of over $31 million and obtained 10 criminal indictments under its employee contributions project. EBSA's most recent national enforcement project involves investigating violations pertaining to ESOPs, such as the incorrect valuation of employer securities and the failure to provide participants with the specific benefits required or allowed under ESOPs, such as voting rights, the ability to diversify their account balances at certain times, and the right to sell their shares of stock. Likewise, more attention is being given to health plan fraud, such as fraudulent multiple employer welfare arrangements (MEWAs). In this instance, EBSA's emphasis is on abusive and fraudulent MEWAs created by promoters that attempt to evade state insurance regulations and sell the promise of inexpensive health benefit insurance but typically default on their benefit obligations. EBSA regional offices determine the focus of their investigative workloads based on their evaluation of the employee benefit plans in their jurisdiction and guidance from the Office of Enforcement. For example, each region is expected to conduct investigations that cover its entire geographic jurisdiction and attain a balance among the different types and sizes of plans investigated. In addition, each regional office is expected to dedicate some percentage of its staff resources to national projects and to regional projects—those developed within its own region that focus on local concerns. In developing regional projects, each regional office uses its knowledge of the unique activities and types of plans in its jurisdiction. For example, a region that has a heavy banking industry concentration may develop a project aimed at a particular type of transaction commonly performed by banks. 
We previously reported that the regional offices spend an average of about 40 percent of their investigative time conducting investigations in support of national projects and almost 25 percent of their investigative time on regional projects. EBSA officials said that their most effective source of leads on violations of ERISA is complaints from plan participants. Case openings also originate from news articles or other publications on a particular industry or company as well as tips from colleagues in other enforcement agencies. Computer searches and targeting of Form 5500 information on specific types of plans account for only 25 percent of case openings. In 1994, we reported that EBSA had done little to test the effectiveness of the computerized targeting runs it was using to select cases. Since then, EBSA has scaled down both the number of computerized runs available to staff and its reliance on these runs as a means of selecting cases. Investigative staff are also responsible for identifying a portion of their cases on their own to complete their workloads and address other potentially vulnerable areas. As shown in figure 1, EBSA's investigative process generally follows a pattern of selecting, developing, resolving, and reviewing cases. EBSA officials told us that they open about 4,000 investigations into actual and potential violations of ERISA annually. According to EBSA, its primary goal in resolving a case is to ensure that a plan's assets, and therefore its participants and beneficiaries, are protected. EBSA's decision to litigate a case is made jointly with the Department of Labor's Regional Solicitors' Offices. Although EBSA settles most cases without going to court, both the agency and the Solicitor's Office recognize the need to litigate some cases for their deterrent effect on other providers. As part of its enforcement program, EBSA also detects and investigates criminal violations of ERISA. From fiscal years 2000 through 2004, criminal investigations resulted in an average of 54 cases closed with convictions or guilty pleas annually. EBSA's enforcement strategy also includes routinely publicizing the results of its litigation efforts in both the civil and criminal areas as a deterrent. To further leverage its enforcement resources, EBSA provides education to plan participants, sponsors, and service providers and allows the voluntary self-correction of certain transactions without penalty. EBSA's education program for plan participants aims to increase their knowledge of their rights and benefits under ERISA. For example, EBSA anticipates that educating participants will establish an environment in which individuals can help protect their own benefits by recognizing potential problems and notifying EBSA when issues arise. The agency also conducts outreach to plan sponsors and service providers about their ongoing fiduciary responsibilities and obligations under ERISA. At the national level, EBSA's Office of Participant Assistance develops, implements, and evaluates agency-wide participant assistance and outreach programs. It also provides policies and guidance to other EBSA national and regional offices involved in outreach activities. EBSA's nationwide education campaigns include a fiduciary education campaign, launched in May 2004, to educate plan sponsors and service providers about their fiduciary responsibilities under ERISA. This campaign also includes educational material on understanding fees and selecting an auditor. 
EBSA’s regional offices also assist in implementing national education initiatives and conduct their own outreach to address local concerns. The regional offices’ benefit advisers provide written and telephone responses to participants. Benefit advisers and investigative staff also speak at conferences and seminars sponsored by trade and professional groups and participate in outreach and educational efforts in conjunction with other federal or state agencies. At the national level, several EBSA offices direct specialized outreach activities. As with EBSA’s participant-directed outreach activities, its efforts to educate plan sponsors and service providers also rely upon Office of Enforcement staff and the regional offices for implementation. For example, these staff make presentations to employer groups and service provider organizations about their ERISA obligations and any new requirements under the law, such as reporting and disclosure provisions. To supplement its investigative programs, EBSA is promoting the self- disclosure and self-correction of possible ERISA violations by plan officials through its Voluntary Fiduciary Correction Program. The purpose of the VFCP is to protect the financial security of workers by encouraging plan officials to identify and correct ERISA violations on their own. Specifically, the VFCP allows plan officials to identify and correct 18 transactions, such as delinquent participant contributions and participant loan repayments to pension plans. Under the VFCP, plan officials follow a process whereby they (1) correct the violation using EBSA’s written guidance; (2) restore any losses or profits to the plan; (3) notify participants and beneficiaries of the correction; and (4) file a VFCP application, which includes evidence of the corrected transaction, with the EBSA regional office in whose jurisdiction it resides. If the regional office determines that the plan has met the program’s terms, it will issue a “no action” letter to the applicant and will not initiate a civil investigation of the violation, which could have resulted in a penalty being assessed against the plan. EBSA has taken steps to address many of the recommendations we have made over a number of years to improve its enforcement program, including assessing the level and types of noncompliance with ERISA, improving sharing of best investigative practices, and developing a human capital strategy to better respond changes in its workforce. EBSA reported a significant increase in enforcement results for fiscal year 2004, including $3.1 billion in total monetary results and closing nearly 4,400 investigations, with nearly 70 percent of those cases resulting in corrections of ERISA violations. Despite this progress, EBSA continues to face a number of significant challenges to its enforcement program, including the lack of timely and reliable plan information, restrictive statutory requirements that limit its ability to assess certain penalties, and the need to better coordinate enforcement strategies with the SEC. EBSA has taken a number of steps, including addressing recommendations from our prior reports that have improved its enforcement efforts across a number of areas. For example, EBSA has continued to refine its enforcement strategy to meet changing priorities and provided additional flexibility to its regional office to target areas of investigations. 
More recently, EBSA implemented a series of recommendations from our 2002 enforcement report that helped it strategically manage its enforcement program, including conducting studies to determine the level and types of noncompliance with ERISA and developing a Human Capital Strategic Management Plan (see table 1). EBSA has reported a substantial increase in results from its enforcement efforts since our last review. For fiscal year 2004, EBSA closed 4,399 civil investigations and reported $3.1 billion in total results, including $2.53 billion in prohibited transactions corrected and plan assets protected, up from $566 million in fiscal year 2002. Likewise, the percentage of civil investigations closed with results rose from 58 percent to 69 percent. Also, applications received for the VFCP increased from 55 in fiscal year 2002 to 474 in 2004. EBSA has been able to achieve such results with relatively small recent increases in staff. Full-time equivalent (FTE) authorized staff levels increased from 850 in fiscal year 2001 to 887 FTEs in fiscal year 2005. The President's budget for fiscal year 2006 requests no additional FTEs. Previously, we and others have reported that ERISA enforcement was hindered by incomplete, inaccurate, and untimely plan data. We recently reported that the lack of timely and complete Form 5500 data affects EBSA's use of the information for enforcement purposes, such as computer targeting and identifying troubled plans. EBSA uses Form 5500 information as a compliance tool to identify actual and potential violations of ERISA. Although EBSA has access to Form 5500 information sooner than the general public, the agency is affected by the statutory filing deadlines, which can be up to 285 days after plan year end, and long processing times for paper filings submitted to the ERISA Filing Acceptance System. EBSA receives processed Form 5500 information on individual filings on a regular basis once a form is completely processed. However, agency officials told us that they still have to wait for a sufficiently complete universe of plan filings from any given plan year to be processed before they can begin their compliance targeting programs. As a result, EBSA officials told us that they are currently using plan year 2002 and 2003 Form 5500 information for computer targeting. They also said that in some cases untimely Form 5500 information affects their ability to identify financially troubled plans whose sponsors may be on the verge of going out of business and abandoning their pension plans, because these plans may no longer exist by the time that Labor receives the processed filing or is able to determine that no Form 5500 was filed by those sponsors. The Form 5500 also lacks key information that could better assist EBSA, IRS, and PBGC in monitoring plans and ensuring that they are in compliance with ERISA. EBSA, IRS, and PBGC officials said that they have experienced difficulties when relying on Form 5500 information to identify and track all plans across years. Although EBSA has a process in place to identify and track plans filing a Form 5500 from year to year, problems still arise when plans change employer identification numbers (EINs) or plan numbers. Identifying plans is further complicated when plan sponsors are acquired, sold, or merged. In these cases, agency officials said that there is an increased possibility of mismatching of EINs, plans, and their identifying information. 
As a result, EBSA officials said they are unable to (1) verify whether all required employers are meeting the statutory requirement to file a Form 5500 annually, (2) identify all late filers, and (3) assess and collect penalties from all plans that fail to file or file late. Likewise, PBGC officials said that they must spend additional time each year trying to identify and track certain defined benefit plans so that they can conduct compliance and research activities. EBSA officials said they are considering measures to better track and identify plans but have not reached any conclusions. Our recent report makes a number of recommendations aimed at improving the timeliness and content of the Form 5500 that will likely assist EBSA's enforcement efforts. In addition to problems with Form 5500 information, concerns remain about the quality of annual audits of plans' financial statements by independent public accountants. For many years, we, as well as the Department of Labor's Office of Inspector General (OIG), have reported that a significant number of these audits have not met ERISA requirements. For example, in 1992 we found that over a third of the 25 plan audits we reviewed had audit weaknesses so serious that their reliability and usefulness were questionable. We recommended that the Congress amend ERISA to require full-scope audits of employee benefit plans and to require plan administrators and independent public accountants to report on how effective an employee benefit plan's internal controls are in protecting plan assets. Although such changes were subsequently proposed, they were not enacted. In 2004, Labor's OIG reported that although EBSA had reviewed a significant number of employee benefit plan audits and made efforts to correct substandard audits, a significant number of substandard audits remain uncorrected. Furthermore, plan auditors performing substandard work generally continue to audit employee benefit plans without being required to improve the quality of the audits. As a result, these audits have not provided participants and beneficiaries the protections envisioned by Congress. Labor's OIG recommended, among other things, that EBSA propose changes to ERISA so that EBSA has greater enforcement authority over employee benefit plan auditors. As we have previously reported, restrictive legal requirements have limited EBSA's ability to assess penalties against fiduciaries or other persons who knowingly participate in a fiduciary breach. Unlike the SEC, which has the authority to impose a penalty without first assessing and then securing monetary damages, EBSA does not have such statutory authority and must assess penalties based on damages or, more specifically, the restoration of plan assets. Under Section 502(l), ERISA provides for a mandatory penalty against (1) a fiduciary who breaches a fiduciary duty under, or commits a violation of, Part 4 of Title I of ERISA or (2) any other person who knowingly participates in such a breach or violation. This penalty is equal to 20 percent of the "applicable recovery amount," or any settlement agreed upon by the Secretary or ordered by a court to be paid in a judicial proceeding instituted by the Secretary. However, the applicable recovery amount cannot be determined if damages have not been valued. This penalty can be assessed only against fiduciaries or knowing participants in a breach who, by court order or settlement agreement, restore plan assets. 
Therefore, if (1) there is no settlement agreement or court order or (2) someone other than a fiduciary or knowing participant returns plan assets, the penalty may not be assessed. For example, last year we reported that ERISA presented legal challenges for EBSA in developing cases related to proxy voting by plan fiduciaries, particularly with regard to valuing monetary damages. Because EBSA has never found a violation that resulted in monetary damages, it has never assessed a penalty or removed a fiduciary because of a proxy voting investigation. Given the restrictive legal requirements that have limited the use of penalties for violations of ERISA's fiduciary requirements, we recommended that Congress consider amending ERISA to give the Secretary of Labor additional authority with respect to assessing monetary penalties against fiduciaries. We also recommended other changes to ERISA to better protect plan participants and increase the transparency of proxy voting practices by plan fiduciaries. Recent events, such as the abusive trading practices of late trading and market timing in mutual funds and new revelations of conflicts of interest by pension consultants, highlight the need for EBSA to better coordinate enforcement strategies with SEC. Last year we reported that SEC and EBSA had separately taken steps to address abusive trading practices in mutual funds. At the time we issued our report, SEC had taken a number of actions to address the abuses, including charging some fund companies with defrauding investors by not enforcing their stated policies on market timing; fining some institutions hundreds of millions of dollars (some of this money was to be returned to long-term shareholders who lost money due to the abusive practices); permanently barring some individuals from future work with investment companies; and proposing new regulations addressing late trading and market timing. Separate from SEC activities, EBSA began investigating possible fiduciary violations at some large investment companies, including those that sponsor mutual funds, and violations by plan fiduciaries. EBSA also issued guidance suggesting that plan fiduciaries review their relationships with mutual funds and other investment companies to ensure they are meeting their responsibilities of acting reasonably, prudently, and solely in the interest of plan participants. Although SEC's proposed regulations on late trading and market timing could have affected some plan participants more adversely than other mutual fund investors, EBSA was not involved in drafting the regulations because it does not regulate mutual funds. In another example of how EBSA and SEC enforcement responsibilities can intersect, SEC recently found that potential conflicts of interest may affect the objectivity of the advice pension consultants are providing to their pension plan clients. SEC's report also raised important issues for plan fiduciaries, who often rely on the advice of pension consultants in operating their plans. Recently, EBSA and SEC issued tips to help plan fiduciaries evaluate the objectivity of advice and recommendations provided by pension consultants. Americans face numerous challenges to their economic security in retirement, including the long-term fiscal challenges facing Social Security; the uncertainty of promised pension benefits; and the potential volatility of the investments held in their defined contribution plans. Given these concerns, it is important that employees' benefits are adequately protected. 
EBSA is a relatively small agency facing the daunting challenge of protecting over $4 trillion in pension and welfare benefit plan assets for millions of Americans. Over the years, EBSA has taken steps to strengthen its enforcement program and leverage its limited resources. These actions have helped better position EBSA to more effectively enforce ERISA. EBSA, however, continues to face a number of significant challenges to its enforcement program. Foremost, despite improvements in the timeliness and content of the Form 5500, the information currently collected does not put EBSA and the other ERISA regulatory agencies in the best position to ensure compliance with federal laws and assess the financial condition of private pension plans. Given the ever-changing complexities of employee benefit plans and how rapidly the financial condition of pension plans can deteriorate, it is imperative that policymakers, regulators, plan participants, and others have more timely and accurate Form 5500 information. In addition, there is a legitimate question as to whether information currently collected on the Form 5500 can be used as an effective enforcement tool by EBSA or whether different information might be needed. Without the right information on plans in a timely manner, EBSA will continue to have to rely on participant complaints as a primary source of investigations rather than being able to proactively identify and target problem areas. Second, in some instances, EBSA's enforcement efforts continue to be hindered by ERISA, the very law it is charged with enforcing. For example, restrictive legal requirements continue to limit EBSA's ability to assess penalties against fiduciaries or others who knowingly participate in a fiduciary breach. Congress may want to amend ERISA to address such limits on EBSA's enforcement authority. Finally, the significant changes that have occurred in pension plans, the growing complexity of such plans' financial transactions, and the increasing role of mutual funds and other investment vehicles in retirement savings plans require enhanced coordination of enforcement efforts with SEC. Furthermore, such changes raise the fundamental question of whether Congress should modify the current ERISA enforcement framework. For example, it is important to consider whether the current division of oversight responsibilities across several agencies is the best way to ensure effective enforcement or whether some type of consolidation or reallocation of responsibilities and resources could result in more effective and efficient ERISA enforcement. We look forward to working with Congress on such crucial issues. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other members of the committee may have. For further information, please contact me at (202) 512-7215. Other individuals making key contributions to this testimony included Joseph Applebaum, Kimberley Granger, Raun Lazier, George Scott, and Roger Thomas.
Congress passed the Employee Retirement Income Security Act of 1974 (ERISA) to address public concerns over the mismanagement and abuse of private sector employee benefit plans by some plan sponsors and administrators. The Department of Labor's Employee Benefits Security Administration (EBSA) shares responsibility with the Internal Revenue Service and the Pension Benefit Guaranty Corporation for enforcing ERISA. EBSA works to safeguard the economic interests of more than 150 million people who participate in an estimated 6 million employee benefit plans with assets in excess of $4.4 trillion. EBSA plays a primary role in ensuring that employee benefit plans operate in the interests of plan participants, and the effective management of its enforcement program is pivotal to ensuring the economic security of workers and retirees. Recent scandals involving abuses by pension plan fiduciaries and service providers, as well as trading scandals in mutual funds that affected plan participants and other investors, highlight the importance of ensuring that EBSA has an effective and efficient enforcement program. Accordingly, this testimony describes EBSA's enforcement strategy and EBSA's efforts to address weaknesses in its enforcement program, along with the challenges that remain. EBSA's enforcement strategy is a multifaceted approach of targeted plan investigations. To leverage its enforcement resources, EBSA provides education to plan participants and plan sponsors. EBSA allows its regional offices the flexibility to tailor their investigations to address the unique issues in their regions, within a framework established by EBSA's Office of Enforcement. The regional offices then have a significant degree of autonomy in developing and carrying out investigations using a mixture of approaches and techniques they deem most appropriate. Participant leads are still the major source of investigations. EBSA officials told us that they open about 4,000 investigations into actual and potential violations of ERISA annually. To supplement their investigations, the regions conduct outreach activities to educate both plan participants and sponsors. The purpose of these efforts is to gain participants' help in identifying potential violations and to educate sponsors in properly managing their plans and avoiding violations. Finally, EBSA maintains a Voluntary Fiduciary Correction Program, through which plan officials can voluntarily report and correct some violations without penalty. EBSA has taken steps to address many of the recommendations we have made over the years to improve its enforcement program, including assessing the level and types of noncompliance with ERISA, improving the sharing of best investigative practices, and developing a human capital strategy to better respond to changes in its workforce. EBSA reported a significant increase in enforcement results for fiscal year 2004, including $3.1 billion in total monetary results and the closure of about 4,400 investigations, with nearly 70 percent of those cases resulting in corrections of ERISA violations. 
Despite this progress, EBSA continues to face a number of significant challenges to its enforcement program, including (1) the lack of timely and reliable plan information, which is highlighted by the fact that EBSA is currently using plan year 2002 and 2003 plan information for its computer targeting; (2) restrictive statutory requirements that limit its ability to assess certain penalties; and (3) the need to better coordinate enforcement strategies with the Securities and Exchange Commission, which is highlighted by recent scandals involving abusive trading practices, such as late trading and market timing, in mutual funds and conflicts of interest by pension consultants.
The Congress passed the Communications Satellite Act of 1962 to promote the creation of a global satellite communications system. As a result of this legislation, the United States joined with 84 other nations roughly 10 years later in establishing the International Telecommunications Satellite Organization—more commonly known as INTELSAT. Each member nation designated a single telecommunications company to represent its country in the management and financing of INTELSAT. These companies were called signatories to INTELSAT and were typically government-owned telecommunications companies, such as France Telecom, that provided satellite communications services as well as other domestic communications services. Unlike any of the other nations that originally formed INTELSAT, the United States designated a private company, Comsat Corporation, to serve as its signatory to INTELSAT. During the 1970s and early 1980s, INTELSAT was the only wholesale provider of certain types of global satellite communications services, such as international telephone calls and the international relay of television signals. By the mid-1980s, however, the United States began encouraging the development of commercial satellite communications systems that would compete with INTELSAT. In 1988, PanAmSat was the first commercial company to begin launching satellites in an effort to develop a global satellite system. Within a decade after PanAmSat first entered the market, INTELSAT faced global satellite competitors. Moreover, intermodal competition emerged during the 1980s and 1990s as fiber optic networks were widely deployed on the ground and underwater to provide international communications services. As competition to INTELSAT grew, commercial satellite companies voiced considerable criticism because they believed that INTELSAT enjoyed advantages stemming from its intergovernmental status that made it difficult for other companies to compete in the market. In particular, these companies noted that INTELSAT enjoyed immunity from legal liability and was often not taxed in the various countries that it served. By the mid-1990s, competitors began to argue that for the satellite marketplace to become fully competitive, INTELSAT would need to be privatized so that it would operate like any other company and no longer enjoy such advantages. At about the same time, INTELSAT recognized that privatization would be best for the company. Decision-makers within INTELSAT noted that the cumbersome nature of the intergovernmental decision-making process left the company unable to respond rapidly to changing market conditions. In 1999, INTELSAT announced its decision to privatize and become a private corporation. By the late 1990s, the United States government also decided that it would be in the interests of consumers and businesses in the United States for INTELSAT to privatize. The ORBIT Act, enacted in March 2000, was designed to promote a competitive global satellite communications services market. It did so primarily by calling for INTELSAT to be fully privatized. The ORBIT Act required, for example, that INTELSAT be transformed into a privately held, for-profit corporation with a board of directors that would be largely independent of former INTELSAT signatories. Moreover, the act required that the newly privatized Intelsat retain no privileges or other benefits from governments that had previously owned or controlled it. 
To ensure that this transformation occurred, the Congress imposed certain restrictions on the granting of licenses that allow Intelsat to provide services within the United States. The Congress coupled FCC's issuance of these licenses to INTELSAT's successful privatization under the ORBIT Act. That is, FCC was told to consider compliance with provisions of the ORBIT Act as it made decisions about licensing Intelsat's domestic operations in the United States. Moreover, FCC was empowered to restrict any satellite operator's provision of certain new services from the United States to any country that limited market access exclusively to that satellite operator. Market access for satellite firms to non-U.S. markets was also affected by trade agreements that were negotiated during the 1990s. Specifically, the establishment of the World Trade Organization (WTO) on January 1, 1995, with its numerous binding international trade agreements, formalized global efforts to open markets to trade in services. Since that time, WTO has become the principal international forum for discussion, negotiation, and resolution of trade issues. For example, the first global trade agreement that promotes countries' open and nondiscriminatory market access to services was the General Agreement on Trade in Services (GATS), which provides a legal framework for addressing barriers to international trade and investment in services and includes specific commitments by member countries to restrict their use of these barriers. Since the adoption of a basic telecommunications services protocol under the GATS in 1998, telecommunications trade commitments have also been incorporated into the WTO rules. Such commitments resulted in member countries agreeing to open markets to telecommunications services, such as global satellite communications services. FCC determined that INTELSAT's July 2001 privatization was in accordance with the ORBIT Act's requirements and licensed the new private company to provide services within the United States. FCC's grant of these licenses was conditioned on Intelsat holding an initial public offering (IPO) of securities by October 1, 2001. The Congress and FCC have extended this date three times, and the current deadline for the IPO is June 30, 2005. Because Intelsat has not yet completed the IPO, some competing satellite companies have stated that the privatization is not fully complete. Some parties have pointed out that implementation of the ORBIT Act could have given rise to action arguably inconsistent with commitments that the United States made in international trade agreements. However, we were told that actual implementation avoided such outcomes and that no disputes arose. On July 18, 2001, INTELSAT transferred virtually all of its financial assets and liabilities to a private company called Intelsat, Ltd., a holding company incorporated in Bermuda. Intelsat, Ltd. has several subsidiaries, including a U.S.-incorporated indirect subsidiary called Intelsat, LLC. Upon privatization, INTELSAT signatories received shares of Intelsat, Ltd. in proportion to their investment in the intergovernmental INTELSAT. Two months before the privatization, FCC determined that INTELSAT's privatization plan was consistent with the requirements of the ORBIT Act for a variety of reasons, including the following. 
Intelsat, Ltd.’s Shareholders’ Agreement provided sufficient evidence that the company would conduct an IPO, which would in part satisfy the act’s requirement that Intelsat be an independent commercial entity. Intelsat, Ltd. no longer enjoyed the legal privileges or immunities of the intergovernmental INTELSAT, since it was organized under Bermuda law and subject to that country’s tax and legal liability requirements. Both Intelsat, Ltd. and Intelsat, LLC are incorporated in countries that are signatories to the WTO and have laws that secure competition in telecommunications services. Intelsat, Ltd. converted into a stock corporation with a fiduciary board of directors. In particular, FCC said that the boards of directors of both Intelsat, Ltd. and Intelsat, LLC were subject to the laws of Bermuda and the United States, respectively, and that the laws of these countries require boards of directors to have fiduciary obligations to the company. Measures taken to ensure that a majority of the members of Intelsat, Ltd.’s board of directors were not directors, employees, officers, managers, or representatives of any signatory or former signatory of the intergovernmental INTELSAT were consistent with the requirements of the ORBIT Act. Intelsat, Ltd. and its subsidiaries had only arms-length business relationships with certain other entities that obtained INTELSAT’s assets. In light of these findings, FCC conditionally authorized Intelsat, LLC to use its U.S. satellite licenses to provide services within the United States. However, FCC conditioned this authorization on Intelsat, Ltd.’s conducting an IPO of securities as mandated by the ORBIT Act. In December 2003, FCC noted that if Intelsat, Ltd. did not conduct an IPO by the statutory deadline, the agency would limit or deny Intelsat, LLC’s applications or requests and revoke the previous authorizations granting Intelsat, LLC the authority to provide satellite services in the United States. In March 2004, Intelsat, Ltd. filed a registration statement with the Securities and Exchange Commission (SEC) indicating its intention to conduct an IPO. Since that time, however, the Congress further extended the required date by which the IPO must occur. In May 2004, the Congress extended the IPO deadline to June 30, 2005, and authorized FCC to further extend that deadline to December 31, 2005, under certain conditions. In late May 2004, Intelsat withdrew its filing with SEC regarding its registration to conduct an IPO. On August 16, 2004, Intelsat, Ltd. announced that its Board of Directors approved the sale of the company to a consortium of four private investors; the sale requires the approval of shareholders holding 60 percent of Intelsat's outstanding shares and also regulatory approval. According to an Intelsat official, this transaction, if approved, would eliminate former signatories’ ownership in Intelsat. Most companies and experts that we interviewed believe that, to date, Intelsat’s privatization has been in accordance with the ORBIT Act’s requirements, and some of these companies and experts that we interviewed believe that FCC is fulfilling its duties to ensure that the privatization is consistent with the act. These parties noted that the ORBIT Act set forth many requirements for Intelsat and that most of these requirements have been fulfilled. However, some companies and experts believe that the IPO is a key element to complete Intelsat’s privatization. 
According to some parties, the IPO would further dilute signatory ownership in Intelsat, Ltd. as envisioned by the ORBIT Act, which would reduce any incentive that former signatories might have to favor Intelsat when selecting a company to provide satellite services. Table 1 compares Intelsat, Ltd.'s ownership on the day of privatization in 2001 with the ownership as of May 6, 2004. As indicated in the table, in May 2004, more than 50 percent of Intelsat, Ltd. was owned by the former signatories to the intergovernmental INTELSAT, although, as mentioned above, the recently announced purchase of Intelsat by four private investors, if approved, would eliminate former signatory ownership in Intelsat, according to an Intelsat official. We were told that there were potential inconsistencies between the ORBIT Act and obligations the United States made in international trade agreements. In particular, the ORBIT Act set requirements for INTELSAT's privatization that, if not met, could have triggered FCC's denial of licenses that would allow a successor private company to INTELSAT to provide services in the United States once that company was incorporated under foreign law. Some stakeholders told us that, had this occurred, FCC's actions could have been viewed as inconsistent with U.S. obligations in international trade agreements. In fact, on August 1, 2000, following the enactment of the ORBIT Act, the European Commission (EC) stated that the ORBIT Act raised a general concern regarding its compatibility with U.S. obligations in the WTO. The EC further emphasized that if the act were used against European Union (EU) interests, the EU would consider exercising its rights to file a trade dispute under the WTO. While we were told that potential inconsistencies could have arisen, INTELSAT privatized according to the ORBIT Act, removing any need for FCC to act in a manner that might be inconsistent with U.S. international trade obligations, and no trade disputes arose. Most stakeholders we spoke with generally stated that the ORBIT Act's requirements have not conflicted with international trade agreements during the privatization of INTELSAT. Officials from FCC, USTR, and the Department of State, as well as satellite company representatives and experts on telecommunications issues, told us that INTELSAT privatized according to the act's requirements. Several stakeholders emphasized that trade disputes had not arisen because INTELSAT privatized in accordance with the ORBIT Act. As of June 2004, WTO and USTR documentation showed that no trade complaints had been filed at the WTO about the ORBIT Act and INTELSAT's privatization. Finally, several stakeholders noted that the act had the effect of complementing international trade agreements by seeking to further open and liberalize trade in international satellite communications services. According to most stakeholders and experts we spoke with, access to non-U.S. satellite markets has generally improved during the past decade. In particular, global satellite companies appear less likely now than they were in the past to encounter government restraints or business practices that limit their ability to provide service in non-U.S. markets. All five satellite companies that we spoke with indicated that access to non-U.S. satellite markets has generally improved. Additionally, four experts that we spoke with also told us that market access has generally improved. Most stakeholders that we spoke with attributed the improved access in non-U.S. 
satellite markets to the WTO and global trade agreements and the trend towards privatization in the global telecommunications industry, rather than to the ORBIT Act. Five satellite companies and four of the experts that we spoke with said that agreements negotiated through the WTO, such as the basic telecommunications commitments, helped improve access in non-U.S. satellite markets. Additionally, two of the satellite companies and one expert told us that the trend towards privatization in the telecommunications industry—such as governments privatizing state-controlled telephone companies—has helped improve market access. At the same time, many stakeholders noted that the ORBIT Act had little to no impact on improving market access. According to several stakeholders, market access was already improving when the ORBIT Act was passed. While some of those we spoke with noted that the ORBIT Act might have complemented the ongoing trends in improved market access, only one satellite company we interviewed stated that the act itself improved market access. This company noted that, by breaking the ownership link between state-owned or monopoly telecommunications companies and Intelsat, the ORBIT Act encouraged non-U.S. telecommunications companies to consider procuring services from competitive satellite companies. Some satellite companies have stated that some market access problems still exist, which they attribute to foreign government policies that limit or slow entry. Some of the companies and experts we spoke with attribute any continuing preference that governments and foreign telecommunications companies may have for doing business with Intelsat to long-standing business relationships that were forged over many years. While some satellite companies believe that FCC should be taking a more proactive approach toward addressing any remaining market access problems in non-U.S. markets, FCC has stated that the concerns about these issues brought to it have not been specific enough to warrant an FCC proceeding. Additionally, FCC has stated that many concerns about market access issues would be most appropriately filed with USTR. USTR has received no complaints about access problems by satellite companies in non-U.S. markets, either in its annual review of compliance with telecommunications trade agreements or in comments solicited in the context of ongoing WTO services negotiations. Despite the general view that market access has improved, some satellite companies and experts expressed concerns that market access issues still exist. These companies and experts generally attributed any remaining market access problems to foreign government policies that limit or slow satellite competitors' access to certain markets. For example, some companies and experts we spoke with said that some countries have policies that favor domestic satellite providers over other satellite systems and that this can make it difficult for nondomestic companies to provide services in these countries. For instance, we were told that some countries require satellite contracts to go first to any domestic satellite providers that can provide the service before other providers are considered. In addition, some companies and one expert we spoke with said that because some countries carefully control and monitor the content that is provided within their borders, the countries' policies may limit certain satellite companies' access to their markets. 
Several companies and an expert we interviewed said that many countries have time-consuming or costly approval processes for satellite companies. In particular, we were told that some countries have bureaucratic processes for licensing and other necessary business activities that make it time-consuming and costly for satellite companies to gain access to these markets. Some stakeholders believe that Intelsat may benefit from legacy business relationships. For approximately 30 years, INTELSAT was the dominant provider of global satellite services. Moreover, until 2001, INTELSAT was an intergovernmental organization, funded and controlled through signatories—often state-controlled telecommunications companies—of the member governments. Several stakeholders noted that Intelsat may benefit from the long-term business relationships that were forged over the decades, since telecommunications companies in many countries will feel comfortable continuing to do business with Intelsat as they have for years. Additionally, two of the satellite companies noted that because some of these companies have been investors in the privatized Intelsat, there may be an incentive to favor Intelsat over other satellite competitors. One global satellite company told us that Intelsat’s market access advantages continue because of inertia—inertia that will only dissipate with time. Two stakeholders also noted that because companies—including domestic telecommunications providers as well as direct customers of satellite services—have plant and equipment as well as proprietary satellite technology in place to receive satellite services from Intelsat, it might cost a significant amount of money for companies to replace equipment in order to use satellite services from a different satellite provider. These legacy advantages can make it more difficult for satellite companies to convince telecommunications companies to switch from Intelsat’s service to their service. However, some other companies have a different view on whether Intelsat has any preferential or exclusive market access advantages. Representatives of Intelsat, Ltd. told us that Intelsat seeks market access on a transparent and nondiscriminatory basis and that Intelsat has participated with other satellite operators, through various trade organizations, to lobby governments to open their markets. Representatives of Intelsat, Ltd. also told us that former signatories of Intelsat own such small percentages of Intelsat, Ltd. that such ownership interests would not likely influence market access decisions in countries in which the government still controls the former signatory. Some companies and many of the experts that we interviewed told us that, in their view, Intelsat does not have preferential access to non-U.S. satellite markets. Further, all five satellite companies as well as several experts that we spoke with said that they have no knowledge that Intelsat in any way seeks or accepts exclusive market access arrangements or attempts to block competitors’ access to non-U.S. satellite markets. While Intelsat is the sole provider of satellite service into certain countries, we were generally told that traffic into some countries is “thin”—that is, there is not much traffic, and therefore there is little revenue potential. In such cases, global satellite companies other than Intelsat may not be interested in providing service to these countries. Thus, the lack of competition in some non-U.S. 
satellite markets does not necessarily indicate the presence of barriers to market access for competitive satellite companies. Some of the companies we spoke with believe that FCC should take a more proactive role in improving access for satellite companies in non-U.S. markets. In particular, some satellite companies and an expert we spoke with indicated that FCC has not done enough to appropriately implement the ORBIT Act because, in their view, the ORBIT Act shifted the burden to FCC to investigate and prevent access issues, rather than solely to adjudicate concerns brought before it. One satellite company said that section 648 of the ORBIT Act, which prohibits any satellite operator from acquiring or enjoying an exclusive arrangement for service to or from the United States, provides a vehicle for FCC to investigate the status of access for satellite companies to other countries’ markets. If FCC were to find a violation of section 648, it would have the authority to withdraw or modify the relevant company’s licenses to provide services within the U.S. market. Another satellite company told us that FCC should conduct an ORBIT Act inquiry under the privatization sections of the act to address any market access issues that might arise if Intelsat has preferential market access related to any remaining advantages from its previous intergovernmental status. Certain other companies, experts, and FCC told us that nothing to date has occurred that would require additional FCC actions regarding the implementation of the ORBIT Act. FCC officials told us that they do not believe that FCC should undertake investigations of market access concerns without specific evidence of violations of section 648 of the ORBIT Act. While some comments filed with FCC in proceedings on Intelsat’s licensing and for FCC’s annual report on the ORBIT Act raise concerns about market access, FCC has stated that these filings amount only to general allegations and fall short of alleging any specific statutory violation that would form a basis sufficient to trigger an FCC enforcement action. Some companies and experts that we spoke with agreed that no evidence of a market access problem has been put forth that would warrant an FCC investigation under the ORBIT Act. Even the satellite companies that complained to FCC in the context of Intelsat’s licensing proceedings told us that they had not made any formal complaints of ORBIT Act violations or asked FCC to initiate a proceeding on the matter. Additionally, FCC told us that broad market access concerns are most appropriately handled by USTR through the WTO. USTR has received no complaints about access problems by satellite companies in non-U.S. markets in either their annual review of compliance with telecommunications trade agreements, or in comments solicited in the context of ongoing WTO services negotiations. We provided a draft of this report to the Federal Communications Commission (FCC), the Department of State, the National Telecommunications and Information Administration (NTIA) of the Department of Commerce, and the United States Trade Representative (USTR) for their review and comment. FCC did not provide comments. USTR and the Department of State provided technical comments that were incorporated into the report. NTIA also provided technical comments that were incorporated into the report as appropriate and also sent formal comments in a letter, which appears in appendix II. 
In its formal comments, NTIA stated that they generally agree with the findings of our report and remain interested in developments regarding Intelsat’s further plans to pursue a private equity buyout. We also invited representatives from five companies to review and comment on a draft of this report. These companies included: Intelsat, Ltd.; Lockheed Martin Corporation; PanAmSat Corporation; SES Americom Inc.; and New Skies Satellites N.V. New Skies and PanAmSat did not provide comments on the draft report. Both Lockheed Martin and Intelsat provided technical comments that we incorporated as appropriate. SES Americom provided both technical comments—which we addressed as appropriate— and substantive comments that expressed concerns about our characterization of some of the issues discussed in this report. The comments from SES Americom and our response are contained in appendix I. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will provide copies to interested congressional committees; the Chairman, FCC; and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov or Amy Abramowitz at (202) 512-2834. Major contributors to this report include Amy Abramowitz, Michael Clements, Emil Friberg, Bert Japikse, Logan Kleier, Richard Seldin, and Juan Tapia-Videla. SES Americom Inc. provided several comments on the draft report. While several were minor technical comments, which we incorporated as appropriate, some of the comments were of a more substantive nature. This appendix provides a summary of the substantive comments and GAO’s response to those comments. SES Americom stated that while GAO notes that several companies have stated that Intelsat’s privatization is not complete until the IPO occurs, GAO fails to note that FCC’s International Bureau has also stated this to be the case. GAO response: Our discussion of FCC’s authorization of licenses for Intelsat to operate in the U.S. makes clear that FCC provided these licenses on a conditional basis because the required IPO had yet to occur. SES Americom states that GAO’s discussion of possible preferences countries and businesses may have for doing business with Intelsat does not fully explain why this may occur. While SES notes that GAO correctly attributes possible preferences to long term business relationships companies/countries may have with Intelsat, SES Americom believes that GAO should mention that possible preferences also arise because Intelsat’s customers have equipment suitable solely for use with Intelsat satellites. GAO response: Regarding customer equipment, we mention that companies have plant and equipment in place to receive service from Intelsat that might cost a significant amount of money to replace, which we believe adequately addresses this point. SES Americom states that GAO should preface our discussion of the required IPO with the word “equity”. 
GAO response: The ORBIT Act's requirement for an IPO does not specifically state "equity IPO," but states that Intelsat must hold an "IPO of securities." Nevertheless, in the context of Inmarsat's required IPO, which is also required under the ORBIT Act, FCC is currently reviewing this very issue—that is, whether the IPO must be an offering of equity securities. Thus, FCC's decision will determine how this will be interpreted.
In 2000, the Congress passed the Open-market Reorganization for the Betterment of International Telecommunications Act (ORBIT Act) to help promote a more competitive global satellite services market. The ORBIT Act called for the full privatization of INTELSAT, a former intergovernmental organization that provided international satellite services. GAO agreed to provide federal officials' and stakeholders' views on (1) whether the privatization steps required by the ORBIT Act have been implemented and whether there were potential inconsistencies between ORBIT Act requirements and U.S. obligations made in international trade agreements; (2) whether access by global satellite companies to non-U.S. markets has improved since the enactment of the ORBIT Act and, if so, to what is this generally attributed; and (3) if any market access problems remain, what role does the Federal Communications Commission (FCC) have in addressing those problems under the ORBIT Act. Most of INTELSAT's privatization steps have taken place and a variety of stakeholders told us that implementation of the ORBIT Act was not inconsistent with the commitments that the United States made in international trade agreements. In July 2001, INTELSAT transferred its satellite and financial assets to a private company. FCC determined that this and other actions satisfied the ORBIT Act requirements for INTELSAT's privatization but noted that the company must hold an initial public offering (IPO) of securities by a required date. The current deadline for the IPO is June 30, 2005. Because Intelsat has not completed the IPO, some satellite companies assert that privatization is not fully complete. Some parties have pointed out that there was a possibility that implementation of the ORBIT Act could have given rise to action arguably inconsistent with commitments that the United States made in international trade agreements. However, we were told that actual implementation avoided such outcomes and no disputes arose. Most stakeholders and experts that GAO spoke with believe that access to non-U.S. satellite markets has improved, but few attribute this improvement to the ORBIT Act. These stakeholders and experts said that global trade agreements, such as the WTO's basic telecommunications commitments, and the global trend towards privatization of telecommunications companies have improved access in non-U.S. markets. Several stakeholders and experts told GAO that improvements in market access were already underway when the Congress passed the ORBIT Act and that the act has complemented ongoing trends towards more open satellite markets. Some satellite companies report continuing market access problems, but there are disagreements regarding whether FCC should investigate and resolve these problems. Some satellite companies that GAO spoke with report problems with access to non-U.S. satellite markets, which they attribute to countries with policies that favor domestic and regional satellite companies, countries exercising control over content, bureaucratic processes in various countries, and long-term business relationships between INTELSAT and various telecommunications companies. Most companies GAO spoke with report that Intelsat does not take active steps to acquire preferential or exclusive market access, and Intelsat itself stated that it does not seek nor, if offered, would accept preferential market access. 
Finally, some companies suggest that FCC should take a more proactive role in investigating market access problems, rather than assuming an adjudicative role. FCC said that evidence provided to the agency has not been sufficient to warrant action and also suggested that trade disputes are more appropriately addressed by the United States Trade Representative.
Prepositioned equipment and supplies are strategic assets, along with sealift and airlift, for projecting military power. These assets include combat equipment, spare parts, and sustainment supplies that are stored on ships and on land in locations around the world to enable the rapid fielding of combat-ready forces. (App. I provides an overview of the military services’ prepositioned assets and their locations.) DOD has made significant investments in its military prepositioning programs, totaling several billion dollars in annual acquisition costs. In addition, the services have collectively used an average of over $1 billion each year to operate and maintain these assets. For example, in fiscal year 2005, the Army spent $386.1 million for storage and maintenance of prepositioned assets, including $76.5 million for assets in South Korea and $38.3 million for assets in Southwest Asia. Prepositioned assets have been used extensively to support operations in Iraq and Afghanistan. The Marine Corps used equipment from two of its three prepositioned squadrons to support these operations. The Army used nearly all of its prepositioned ship stocks and land-based stocks in Kuwait and Qatar, in addition to drawing some equipment from Europe. Military equipment and infrastructure are often located in corrosive environments that increase the deterioration of assets and shorten their useful life. The extensive and long-term deployments of U.S. troops in Southwest Asia are likely to magnify the effects of corrosion on military equipment, including prepositioned assets, because of the region’s harsh operating environment. Higher rates of corrosion result in increased repairs and replacements, drive up costs, and take critical systems out of action, reducing mission readiness. Corrosion can also reduce the safety of equipment items. Although reliable cost data are not available, estimates of corrosion costs DOD-wide have ranged from $10 billion to $20 billion annually. We have found in our prior work that DOD and the military services did not have an effective management approach to mitigate and prevent corrosion. We recommended that DOD develop a departmentwide strategic plan with clearly defined goals, measurable outcome-oriented objectives, and performance measures. DOD concurred and in December 2003 issued its corrosion strategy. According to DOD’s corrosion strategy, knowing the costs of corrosion is essential to adequately implementing the strategy, and having corrosion data helps the department learn what works so it can be more effective in reducing corrosion. In addition, the Defense Science Board in 2004 stated that “accurate and objective corrosion data collection and new incentives to reward life-cycle cost reduction efforts must be implemented” as part of an effective corrosion control program and that such data are critical “not only to understand the depth of the problem, but to enable a quantitative corrosion mitigation strategy, which is founded on fact.” The Army and Marine Corps have taken some measures to reduce the impact of corrosion on prepositioned assets, but the Army could increase its use of storage facilities for land-based assets. Prepositioned equipment drawn by Army and Marine Corps units for military operations in Iraq during 2003 had mostly been stored in humidity-controlled facilities and was reported to be in good operating condition and was not degraded by corrosion. 
The primary measure taken to reduce corrosion and achieve this good operating condition was the use of humidity-controlled storage facilities. However, we identified several locations where the Army is currently storing a substantial portion of its prepositioned equipment outdoors. Temporary shelters may be a feasible option to address immediate storage needs. When prepositioned equipment was drawn by Army and Marine Corps units in military operations in Iraq during 2003, it was reported to be in good working condition and was not degraded by corrosion. Army officials from the 3rd Infantry Division have stated that with the exception of rubber seals on some vehicles, prepositioned equipment entering Southwest Asia was in good shape and had minimal, if any, corrosion. These officials said they did not experience any corrosion that affected their ability to perform operations. Similarly, officials with the 1st and 2nd Marine Expeditionary Forces who used or observed the use of prepositioned equipment in Southwest Asia found it was in a high state of readiness and could not recall any instance where corrosion affected their ability to perform operations. Furthermore, officials with the 3rd Marine Expeditionary Force said the equipment on the prepositioning ship USNS Lummus that was used in a 2004 training exercise in South Korea was generally in the same good operating condition it was when first uploaded about 2 years previously. These officials stated that subsequent maintenance in August 2005 confirmed that the equipment continued to be in good operating condition based on a detailed examination of about 200 pieces of this equipment. They told us that with the exception of minor hydraulic leaks and o-ring deterioration, the equipment was generally free of corrosion problems. The primary measure to reduce corrosion of Army and Marine Corps prepositioned assets has been the use of humidity-controlled storage facilities. Most of the prepositioned equipment drawn for military operations in Iraq during 2003 had been stored, either afloat or on land, in such facilities. Under Army policy, the preferred method for storing prepositioned assets is in humidity-controlled facilities because such storage is considered highly effective in preserving equipment. Maintaining low humidity levels reduces corrosion because moisture is a primary cause of corrosion. Similarly, Marine Corps policies indicate that equipment should be sheltered in climate-controlled facilities to the greatest extent possible. Army and Marine Corps officials told us that the use of humidity-controlled facilities is effective at minimizing equipment corrosion and maintaining high readiness levels. Army equipment on prepositioning ships is stored below deck in humidity-controlled cargo space. In addition, the Army stores some of its land-based prepositioned equipment in humidity-controlled warehouses. Marine Corps prepositioned assets are stored in humidity-controlled facilities either on ships or in caves in Norway. Humidity levels, particularly on ships, are required under Army and Marine Corps guidelines to stay within a specific range on a continuous basis and are closely monitored. In addition to humidity-controlled storage, the Army and Marine Corps have taken other measures intended to help reduce the impact of corrosion on prepositioned assets. Army and Marine Corps policies require that repaired equipment be restored to good condition before being placed in prepositioned status. 
Specifically, Army maintenance regulations require prepositioned equipment to be maintained at “10/20” standards, the highest standard the Army has for equipment maintenance. Army maintenance regulations also provide for the use of lubricants and preservatives, as well as regular inspections. Marine Corps policy indicates that all equipment generally will be in “Code A” condition at the time it is placed in storage. Code A means the equipment is serviceable without any limitation or restriction. Marine Corps officials told us equipment meeting this standard would have little to no corrosion. Marine Corps maintenance guidance for prepositioned equipment consists of a variety of corrosion prevention and mitigation measures, including visual inspections for leaks, corrosion removal and recoating, and preservation. For equipment stored on the prepositioned ships, inspections are conducted on a periodic basis. Both Army and Marine Corps officials said corrosion is routinely treated as part of the maintenance process for restoring equipment to meet standards. We identified several locations where the Army is storing a significant amount of land-based prepositioned assets outdoors without adequate sheltering. Specifically, we found equipment being stored outdoors at Camp Carroll, South Korea; Camp Arifjan, Kuwait; and Goose Creek, South Carolina. At these locations, assets are left relatively unprotected from moisture, sand, and other elements that contribute to corrosion. Army officials noted that unprotected equipment corrodes faster and will more quickly fall below required maintenance condition standards. At Camp Carroll in South Korea, about 30 percent of the Army’s Heavy Brigade Combat Team equipment—mostly sustainment stock—is stored outdoors in an often damp and humid region. The remaining equipment is stored in humidity-controlled facilities. Army officials told us that the equipment had been poorly maintained and, as a result, experienced many significant defects and readiness shortfalls, with corrosion being one of the primary problems. These officials said some of the equipment corroded faster and more severely because of being stored outside and, as a result, the Army incurred additional maintenance costs. Army officials in South Korea noted that it costs more to maintain equipment that is stored outside in part because the equipment needs to be inspected three times more often than equipment in humidity-controlled storage. Large amounts of Army prepositioned equipment are also stored outside in Kuwait where, according to DOD and Army officials, the environment is highly corrosive because of the humid climate, sand with high salinity levels, and strong winds. As of April 2006, the Army was storing outside nearly all of its prepositioned assets (numbering about 11,000 items) in Southwest Asia. At the Army’s prepositioning afloat facility in Goose Creek, South Carolina, equipment is stored outside during the time it is not undergoing maintenance because of a lack of storage facilities. The amount of time equipment is stored outside ranges, on average, from 1 month to more than 3 months. In some cases, equipment is stored outside well over 3 months. For example, 44 M1A1 tanks and 10 fuel tankers sat outdoors for more than a year after undergoing maintenance and experienced a total of $1.2 million in corrosion-related damage. 
Army officials said that prolonged periods of outdoor storage as happened in this case rarely occur, but that some period of outdoor storage is expected for equipment waiting upload. Army officials acknowledged having an immediate need for additional sheltering, preferably with humidity control capability, for prepositioned equipment located in South Korea, Kuwait, and South Carolina. However, under current construction plans, additional storage facilities will not be available at all three sites until 2012 at the earliest. In South Korea and Kuwait, Army officials said that even with the additional planned storage facilities, substantial amounts of equipment will still be stored outdoors. For example, officials estimated about 20 percent of equipment in Kuwait will remain outside. Officials cited competing funding priorities as the primary reason for not providing indoor storage for all land-based prepositioned assets. Army officials also cited uncertainties regarding the number and type of equipment and length of time it is stored, which make it difficult to accurately define storage requirements and justify funding for construction of additional storage facilities. In South Korea, Army officials told us the lack of available land limits their ability to construct new, or expand existing, facilities. These officials also said that estimating storage needs is difficult because of uncertainties regarding the consolidation and reconfiguration of U.S. Forces Korea facilities related to future force restructuring. Army prepositioning afloat officials said that the Goose Creek facility primarily is a maintenance facility and is not meant for the storage of equipment, which makes it difficult to justify the building of new storage space. Although building additional storage will require Army investment, the use of humidity-controlled storage in general has been shown to provide a substantial return on investment. According to a study by the Army Cost and Economic Analysis Center, sheltering Army National Guard equipment in a humidity-controlled facility had a potential return on investment of a minimum of $8 for every $1 invested. The Army National Guard also estimates that it will have achieved a total of over $1.2 billion in cost savings by fiscal year 2010. Most of the projected savings is based on having to perform less maintenance on equipment that is being preserved better in humidity-controlled facilities. The humidity-controlled sheltering program includes combat vehicles, trailers, radar systems, and other equipment located at Guard facilities in 45 states and U.S. territories. According to Army storage and maintenance guidelines, storage of equipment in facilities without humidity control—particularly in open storage without protection—not only invites greater and more rapid deterioration because of corrosion but requires increased surveillance, inspections, and maintenance. For example, whereas combat vehicles in humidity-controlled facilities need to be exercised and road tested every 30 months, vehicles stored without humidity control require exercising every 12 months. One of the benefits of humidity control is avoiding or at least minimizing these increased maintenance requirements. Given the competing funding priorities and other constraints cited by Army officials in providing additional storage facilities for prepositioned equipment, temporary shelters may be a feasible option to address immediate storage needs. 
Temporary shelters are available in a range of sizes, materials, and features, including humidity control. For example, "K-SPAN" temporary shelters are steel structures constructed on-site and set over a concrete foundation. These shelters may be dismantled, packaged, and relocated. Army officials told us that temporary shelters are used primarily in situations where immediate storage is required but may be durable enough to last for several years. Furthermore, they can be acquired faster than permanent facilities, which may take several years to plan, fund, and build. The military services have made prior use of temporary shelters in several locations, for both prepositioned and non-prepositioned equipment. For example, the Marine Corps uses temporary humidity-controlled facilities in Florida to store some of its prepositioned assets awaiting maintenance and upload to ships. In addition, the Army has stored prepositioned equipment in temporary shelters located in Livorno, Italy, and Camp Carroll, South Korea. The Marine Corps has also used temporary shelters to store non-prepositioned equipment in Hawaii.

The lack of available corrosion data impairs the ability of the Army and Marine Corps to achieve long-term cost savings through corrosion prevention and mitigation efforts. The Army and Marine Corps consider collection of corrosion data on prepositioned assets to be a low priority and, consequently, do not systematically collect them. These data could be used to support additional prevention and mitigation efforts that achieve long-term cost savings, similar to the Army's previous success using corrosion data regarding non-prepositioning programs. Corrosion-related data that could enhance efforts to prevent and mitigate corrosion of prepositioned assets are unavailable because the Army and Marine Corps consider collection of this information to be a low priority and, consequently, do not systematically collect it. Army regulations require units to collect corrosion-related data as part of their equipment maintenance and storage programs, while the Marine Corps generally lacks requirements for collection of corrosion-related data. For example, the Army's Corrosion Prevention and Control Program regulation includes a requirement for a corrosion-related survey of all divisions and separate combat brigades to be conducted at least every 4 years. In addition, Army policy on reporting equipment quality deficiencies includes a requirement to report problems that are corrosion related. The Marine Corps, on the other hand, does not require the collection of corrosion information for all equipment, but believes it to be beneficial. The mission of the Marine Corps' Corrosion Prevention and Control Program is to reduce maintenance requirements and costs associated with corrosion, and the program seeks to identify and assess current and projected corrosion problems for all tactical ground and ground support equipment. Marine Corps officials said that the desire for the collection of corrosion information applies to all Marine Corps activities, including prepositioning programs, but acknowledge that data are not collected on prepositioned assets because they have a low priority. Corrosion data could be used to help identify underlying causes of maintenance problems and obtain a better understanding of the costs of corrosion and the extent it affects readiness.
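To illustrate the kind of analysis such data could support, the following sketch aggregates maintenance records that carry a simple corrosion cause flag into a corrosion cost share. The records, field names, and dollar figures are illustrative assumptions, not Army or Marine Corps data.

```python
# Illustrative maintenance records for prepositioned items; a real system would
# draw these from maintenance logs that include a corrosion cause code.
records = [
    {"item": "5-ton truck", "repair": "engine block",   "cost": 12000, "corrosion_related": True},
    {"item": "M1A1 tank",   "repair": "hydraulic line", "cost": 8000,  "corrosion_related": False},
    {"item": "fuel tanker", "repair": "frame recoating", "cost": 15000, "corrosion_related": True},
    {"item": "trailer",     "repair": "wiring harness", "cost": 3000,  "corrosion_related": False},
]

# Aggregate total repair cost and the portion attributed to corrosion.
total_cost = sum(r["cost"] for r in records)
corrosion_cost = sum(r["cost"] for r in records if r["corrosion_related"])
share = corrosion_cost / total_cost

print(f"Total maintenance cost:     ${total_cost:,}")
print(f"Corrosion-related cost:     ${corrosion_cost:,}")
print(f"Corrosion share of repairs: {share:.0%}")
```

A single cause code of this kind, captured routinely in maintenance logs, would allow corrosion's share of repair costs to be reported continuously rather than reconstructed through special one-time reviews such as the South Korea review discussed below.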
Despite Army corrosion data collection requirements and the establishment of corrosion prevention and control programs in the Army and Marine Corps, we found that information about corrosion of prepositioned assets is generally lacking in both services. We reviewed a wide range of reports and other documentation on Army and Marine Corps prepositioned equipment and found these to be almost devoid of corrosion-related data. For example, we examined information on the maintenance condition and repair actions for prepositioned equipment from the Army Maintenance Management System, but this system did not contain information regarding the extent and nature of equipment corrosion. Likewise, the cost data on prepositioned equipment contained in the Marine Corps’ Standard Accounting, Budgeting and Reporting System, which contains total maintenance and repair costs for all prepositioned equipment, also did not include information specifically on corrosion costs. We also asked the Army and Marine Corps for information regarding the impact of corrosion on maintenance costs, equipment deficiencies, inventory levels, and readiness rates. In almost every instance, this corrosion information was not available. As we have previously reported, DOD and the military services generally have a limited amount of corrosion data related to cost estimates, readiness, and safety data. According to Army and Marine Corps officials, corrosion information on prepositioned assets is unavailable primarily because it has low priority. Although Army guidance for documenting equipment maintenance includes detailed instructions for reporting corrosion issues, Army officials said most of those responsible for documenting the maintenance action do not want to take the extra time to include corrosion information because they see it as having minimal value and have no incentive to collect it. Similarly, Marine Corps officials stated that there is minimal incentive to capture and report corrosion costs for prepositioned equipment because maintenance costs are typically managed at more general levels, such as the costs to repair or replace a piece of equipment. Officials from both the Army and the Marine Corps said that corrosion is routinely treated as part of the overall maintenance process, and corrosion-related data are not tracked separately. For example, Army officials at Camp Carroll, South Korea, told us that corrosion observed on the engine blocks in 5-ton trucks would be repaired during maintenance performed on the entire engine and would not be noted in the maintenance logs. Instead, documentation of the maintenance actions would include a description of the equipment or component and why it was not functional—such as being broken or cracked—but would not include the reason for the repair, such as corrosion. According to Marine Corps officials, corrosion information has value but not enough to be included with more critical information, such as the amount of equipment in the inventory and amount in serviceable condition. Although the Army and Marine Corps are not collecting data about the current costs to prevent and mitigate corrosion of prepositioned assets, the military services have estimated that at least 25 percent of overall maintenance costs are corrosion related and that as much as one-third of these costs could be reduced through more effective corrosion prevention and mitigation. 
Army and Marine Corps officials told us that this estimate applies to both prepositioned and non-prepositioned assets because corrosion affects both types of equipment in similar ways. Because of the lack of available cost data, the Army, at our request, conducted a limited review of maintenance records for about 2,000 pieces of prepositioned stock in South Korea. The Army determined that about $8.7 million (31 percent) of the estimated $28 million spent to restore this equipment to serviceable condition was used to address corrosion-related problems. As another indication of corrosion costs, Marine Corps officials estimated that corrosion costs make up at least 50 percent of the $110,000 needed, on average, to repair motorized lighterage prepositioned equipment. The additional information that would be obtained through the collection of corrosion data could support the Army's and Marine Corps' efforts to more effectively prevent and mitigate corrosion and achieve long-term cost savings, which could be significant given the resources the military services devote each year to addressing corrosion-related problems. Corrosion prevention measures may reduce the amount of maintenance needed, thereby extending the availability of equipment items over their life cycle. The Army has had previous success using corrosion data regarding non-prepositioning programs to support corrosion prevention and mitigation efforts that achieved long-term cost savings. For example, the Army National Guard began the initial phase of a humidity-controlled storage program for its vehicles and equipment in 1994. Guard officials told us that they collected and analyzed an extensive amount of information on corrosion and its cost impacts on selected pieces of equipment and estimated that a significant amount of corrosion-related costs could be avoided by using humidity-controlled storage facilities. Program officials currently estimate that the sheltering and preservation effort will save a total of about $1.2 billion through fiscal year 2010, which reflects a 9 to 1 return on investment. Army officials cited similar results after collecting corrosion data on Hellfire missile launchers. The types and areas of the launchers that were most prone to corrosion—such as missile safety/arming switches—were identified and documented. Based on this research, maintenance technicians knew where to look for corrosion and how to control it before it worsened. The Army Missile Command's tactical missile program executive office attributed a large portion of its $3.2 billion overall long-term life cycle savings to the Hellfire corrosion prevention measures. Collection of corrosion data for prepositioned equipment could better enable the Army and Marine Corps to support similar corrosion prevention and mitigation efforts in their prepositioning programs.

Effectively addressing corrosion on prepositioned stocks of equipment can enable the services to achieve significant cost savings and increase readiness and safety for rapidly fielding combat-ready forces around the world. Although the Army and Marine Corps have taken measures to reduce the impact of corrosion on prepositioned assets, there are immediate opportunities for taking additional action. Sheltering assets—especially sheltering in humidity-controlled facilities—has been shown to be a key anticorrosion practice, yet large amounts of Army land-based prepositioned assets are stored outdoors without adequate sheltering.
This practice is wasteful given the large investment in acquiring the equipment and the annual costs of maintaining it. Furthermore, while the Army and Marine Corps do not collect corrosion data for prepositioned equipment, the collection of such data could provide additional information to identify the underlying causes of maintenance problems and develop solutions to address these problems. Without such data, the services may lack the incentive to support efforts to more effectively prevent and mitigate corrosion and achieve long-term cost savings. Until the Army and Marine Corps take additional actions to prevent corrosion, such as implementing use of temporary shelters to the greatest extent feasible and collecting corrosion-related data, prepositioned equipment stored outdoors will continue to corrode at an accelerated pace and the services will continue to incur unnecessary costs for maintaining equipment and repairing corrosion damage.

To reduce the impact of corrosion on prepositioned assets and support additional corrosion prevention and mitigation efforts, we recommend that the Secretary of Defense take the following three actions:

Direct the Secretary of the Army to examine the feasibility of using temporary shelters, including humidity-controlled facilities, to store land-based prepositioned assets currently stored outdoors, and if such use is determined to be feasible, to take appropriate actions to implement the use of shelters to the maximum extent possible.

Direct the Secretary of the Army to collect corrosion-related data, as required in existing Army regulations, and use these data to support additional corrosion prevention and mitigation efforts.

Direct the Commandant of the Marine Corps to require the collection of corrosion-related data and use these data to support additional corrosion prevention and mitigation efforts.

We also recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to specify the department's planned actions, milestones, and resources for completing an Army feasibility study on the use of temporary shelters to store land-based prepositioned assets and for collecting and using Army and Marine Corps corrosion-related data to support additional corrosion prevention and mitigation efforts.

In commenting on a draft of this report, DOD concurred with our recommendations that the Army consider the feasibility of using temporary shelters, including humidity-controlled facilities, to store land-based prepositioned assets currently stored outdoors and that the Army and Marine Corps collect and use corrosion-related data to support additional corrosion prevention and mitigation efforts. However, DOD did not provide specific information on planned actions, milestones, and resources for implementing the recommendations. With respect to the Marine Corps, DOD stated that collection of adequate data is not a matter of being a low priority but a funding issue. As noted in our report, we were told by Marine Corps officials that collection of these data has been a low priority. We believe that funding and priorities should be aligned to the greatest extent possible to provide greater assurance that the department's resources are being used prudently. As stated in our report, DOD can achieve long-term cost savings by investing in additional corrosion prevention and mitigation efforts.
In addition, investments in corrosion prevention measures may reduce the amount of maintenance needed on equipment items, thereby extending the availability of equipment items over their life cycle. On the basis of our evaluation of DOD's comments, we have added a recommendation that DOD specify actions, milestones, and resources for implementing our recommendations to the Army and the Marine Corps. DOD's comments are reprinted in appendix II.

We focused our review on the prepositioned assets managed by the Army and Marine Corps because these two services have the majority of the military's prepositioned assets, and these services provided most of the equipment used in current operations in Southwest Asia. To assess the measures taken by the Army and Marine Corps to reduce the impact that corrosion has on prepositioned assets, we met with DOD and service command officials responsible for managing and maintaining prepositioned assets; obtained their assessments and perspectives on corrosion prevention and mitigation programs and strategies; and obtained and reviewed DOD and service policies, procedures, and practices, including technical orders and manuals, for managing and maintaining prepositioned assets. We met with DOD officials involved with developing DOD's long-term strategy to prevent and control corrosion. We also discussed additional actions that could be taken to further prevent and mitigate corrosion. In addition, we visited selected prepositioning locations and maintenance facilities, including the Army's facilities in Goose Creek, South Carolina, and Camp Carroll, South Korea, and the Marine Corps Logistics Command in Albany, Georgia, and Blount Island Command in Jacksonville, Florida. To assess the availability of corrosion-related data to the Army and Marine Corps to support corrosion prevention and mitigation efforts for prepositioned assets, we met with DOD and service command officials responsible for managing and maintaining prepositioned assets, and obtained and reviewed DOD and military service policies and procedures for collecting and reporting maintenance costs and related equipment material condition information. We obtained and analyzed various cost and maintenance reports on these assets, including inspection and maintenance logs, databases and assessments, and after-action reports. In particular, we discussed the barriers that exist to identifying and quantifying the impact of corrosion on prepositioned assets' maintenance costs and material condition, and the metrics and related information systems needed to better collect, track, report, and manage efforts to prevent and mitigate corrosion as well as quantify the related funding requirements to address this issue.

We interviewed officials and obtained documentation at the following locations:
Office of the Secretary of Defense Corrosion Policy and Oversight Office
Headquarters, Department of the Army
U.S. Army Materiel Command, Fort Belvoir, Virginia
Tank-Automotive and Armaments Command, Warren, Michigan, and Rock Island, Illinois
U.S. Army Field Support Command, Rock Island, Illinois
U.S. III Army Corps, Fort Hood, Texas
U.S. Army Field Support Battalion Afloat, Goose Creek, South Carolina
U.S. Forces Korea and Eighth U.S. Army, Yongsan Garrison, South Korea
U.S. Army Field Support Battalion Far East, Camp Carroll, South Korea
Materiel Support Center Korea, Camp Carroll, Waegwan, South Korea
19th Theater Support Command, Camp Walker, Daegu, South Korea
U.S. Army Pacific, Fort Shafter, Hawaii
U.S. Marine Corps Headquarters
U.S. Marine Corps Forces, Pacific, Hawaii
I Marine Expeditionary Force, Camp Pendleton, California
II Marine Expeditionary Force, Camp Lejeune, North Carolina
III Marine Expeditionary Force, Okinawa, Japan
Marine Corps Systems Command, Quantico, Virginia
Marine Corps Logistics Command, Albany, Georgia
Blount Island Command, Jacksonville, Florida
Office of the Inspector General of the Marine Corps
Bureau of Medicine and Surgery
Naval Facilities Engineering Command
CNA Corporation, Alexandria, Virginia
U.S. Navy Inspector General
Naval Air Systems Command, Office of the Inspector General, Naval Audit Service
Naval Medical Logistics Command, Fort Detrick, Maryland
Navy Expeditionary Medical Command, Cheatham Annex
Headquarters, Seventh Air Force, South Korea
United States Pacific Command
United States Forces Korea

We conducted our work from May 2005 through February 2006 in accordance with generally accepted government auditing standards. We reviewed available data for inconsistencies and discussed the data with DOD and service officials. We determined that the data used for our review were sufficiently reliable for our purposes.

We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, and the Commandant of the Marine Corps. We will also make copies available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-8365. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The military services have prepositioning programs to store combat or support equipment and supplies near areas with a high potential for conflict and to speed response times and reduce the strain on other mobility assets. The Army's program involves three primary categories of stocks: combat brigade sets, operational projects, and war reserve sustainment stocks stored at land sites and aboard prepositioning ships around the world. The Marine Corps also prepositions equipment and supplies aboard prepositioning ships and at land sites in Norway. The Navy's prepositioning efforts are comparatively small, used mainly to support the Marine Corps' prepositioning program and deploying forces. The Navy prepositions equipment and supplies at land sites and aboard the maritime prepositioning ships. The Air Force prepositions stocks of war reserve equipment and supplies to meet initial contingency requirements and to sustain early deploying forces. The Air Force's prepositioned war reserve stocks include bare base sets; vehicles; munitions; and a variety of consumable supplies, such as rations, fuel, support equipment, aircraft accessories, and medical supplies. The services' prepositioning programs are briefly described in table 1. The military services store these stocks of equipment and supplies at several land sites and aboard prepositioning ships around the world. Most of the military services store equipment and supplies in Southwest Asia, the Pacific theater, Europe, and aboard prepositioning ships. Figure 1 shows the major locations of prepositioned stocks.

In addition to the contact named above, Thomas Gosling, Assistant Director; Larry Bridges; Renee Brown; Lisa Canini; Amy Sheller; Allen Westheimer; and Tim Wilson were major contributors to this report.
The military services store prepositioned stocks of equipment and material on ships and land in locations around the world to enable the rapid fielding of combat-ready forces. GAO's prior work has shown that the readiness and safety of military equipment can be severely degraded by corrosion and that the Department of Defense (DOD) spends billions of dollars annually to address corrosion. GAO was asked to review the impact of corrosion on prepositioned assets. GAO's specific objectives were to assess (1) the measures taken by the Army and the Marine Corps to reduce the impact of corrosion on prepositioned assets and (2) the availability of corrosion-related data to the Army and the Marine Corps to support corrosion prevention and mitigation efforts for prepositioned assets. The Army and Marine Corps have taken some measures to reduce the impact of corrosion on prepositioned assets, primarily through the use of humidity-controlled storage facilities on ships and in some land-based locations, but a substantial portion of Army land-based prepositioned assets are stored outdoors and are left relatively unprotected from elements that contribute to corrosion. When equipment was drawn for military operations for Operation Iraqi Freedom during 2003, it was reported in good operating condition and not degraded by corrosion. Most of this equipment had been stored in humidity-controlled facilities. However, whereas all Marine Corps prepositioned assets are stored in humidity-controlled facilities, the Army currently stores a significant amount of its land-based prepositioned assets outdoors. Under Army policy, the preferred method for storing prepositioned assets is in humidity-controlled facilities because outdoor storage makes equipment more susceptible to corrosion and increases maintenance requirements and costs. One Army study showed that sheltering equipment in a humidity-controlled facility had a return on investment, at minimum, of $8 for every $1 invested. In South Korea, the Army has recently completed an intensive effort to repair prepositioned assets and correct some long-standing problems, but almost one-third of the assets continue to be stored outside. Similarly, as the Army reconstitutes its prepositioned equipment in Southwest Asia, thousands of Army equipment items in Kuwait are stored outdoors in harsh environmental conditions. Army officials cited competing funding priorities and other factors as reasons for not providing indoor storage for all land-based prepositioned assets. However, temporary shelters may be a feasible option to address immediate storage needs. The Army has used temporary shelters and humidity-controlled storage for some prepositioned assets. Although the Army requires corrosion-related data collection for equipment items and Marine Corps officials believe them to be beneficial, data that could help reduce corrosion of prepositioned assets are not available. They are not available because the services consider this information to be a low priority and do not systematically collect it. Without these data, the services are not in a position to identify causes of corrosion, support efforts to more effectively reduce corrosion, and achieve long-term cost savings. Army and Marine Corps documents include information on the maintenance condition, actions, and costs for prepositioned equipment, but provide little data on corrosion. 
While cost data are limited, the services have estimated that about 25 percent of overall equipment maintenance costs are corrosion related and perhaps as much as one-third of these costs could be reduced through more effective corrosion prevention and mitigation. An Army review of maintenance records for about 2,000 pieces of prepositioned stock in South Korea found that $8.7 million (31 percent) of the estimated $28 million spent to restore this equipment was used to address corrosion. The Army has had previous success using corrosion data on non-prepositioned equipment programs to support corrosion prevention and mitigation.
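As a rough illustration of how these figures relate, the services' rule of thumb, that about 25 percent of maintenance costs are corrosion related and up to one-third of those costs may be reducible, can be applied to a notional maintenance budget. The budget figure below is an illustrative assumption; only the percentages and the South Korea review amounts come from this report.

```python
# Rule-of-thumb estimate of potentially avoidable corrosion costs.
annual_maintenance_budget = 100_000_000   # dollars (assumed, for illustration only)
corrosion_share = 0.25                    # services' estimate: at least 25 percent corrosion related
avoidable_fraction = 1 / 3                # upper-bound portion considered reducible

corrosion_cost = annual_maintenance_budget * corrosion_share   # $25 million
avoidable_cost = corrosion_cost * avoidable_fraction           # about $8.3 million

# The Army's South Korea review implies a somewhat higher corrosion share.
south_korea_share = 8.7 / 28.0                                 # about 31 percent

print(f"Estimated corrosion-related cost: ${corrosion_cost:,.0f}")
print(f"Potentially avoidable cost:       ${avoidable_cost:,.0f}")
print(f"South Korea review share:         {south_korea_share:.0%}")
```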
This section describes (1) the operation and regulation of the electricity system and (2) advances in technologies available to customers that allow them to generate, store, and manage their consumption of electricity. The electricity system involves four distinct functions: electricity generation, electricity transmission, electricity distribution, and grid operations. As shown in figure 1, electricity is generated at power plants, from which it flows over high-voltage, long-distance transmission lines to transformers that convert it to a lower voltage to be sent through a local distribution system for use by residential and other customers. Because electricity is not typically stored in large quantities, grid operators constantly balance the generation and consumption of electricity to maintain reliability. In addition, electricity suppliers sell electricity to residential and other customers. Continuously balancing the generation and consumption of electricity can be challenging for grid operators because customers may use sharply different amounts of electricity over the course of a day and throughout the year. For example, in many areas, customer demand for electricity rises throughout the day and reaches its highest point, or peak demand, in late afternoon or early evening (see figure 2 for an example of how demand changes throughout the day). As we noted in a prior report, throughout the day, grid operators direct power plants to adjust their output to match changes in demand for electricity. Grid operators typically first use electricity produced by the power plants that are least expensive to operate; operators then increase the use of electricity generated by more expensive power plants, as needed to match increases in electricity demand. As a result, providing electricity to meet peak demand is generally more expensive than during other parts of the day, because to do so, grid operators use power plants that are more expensive to operate. In general, grid operators perform planning to ensure that grid infrastructure has sufficient capacity—the maximum capability in megawatts to generate and transmit electricity—to meet future peak demand, as we found in our review of reports from DOE and industry sources. To accomplish this, grid operators typically develop forecasts of future electricity demand based on historical information about customer electricity use combined with assumptions about how customer demand will change in the future based on population growth, economic conditions, and other factors. In general, grid operators assess the adequacy of existing grid infrastructure, identify any capacity needs, and evaluate the cost and effectiveness of potential solutions to address these needs. Potential solutions could include building a power plant to generate additional electricity, building a new transmission line to transport electricity to an area with estimated high future electricity demand, or implementing a program to encourage customers in a high- demand area to use less electricity. Responsibility for regulating the electricity industry is divided between the states and the federal government. Most customers purchase electricity through retail markets. State regulators, often called public utility commissions, generally oversee these markets and the prices retail customers pay for electricity (see sidebar). Before electricity is sold to retail customers, it may be bought, sold, and traded in wholesale electricity markets. 
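Before turning to regulation, the merit-order approach described above, in which the least expensive plants are dispatched first and more expensive plants are added as demand rises, can be illustrated with a minimal sketch. The plant names, capacities, and operating costs below are illustrative assumptions, not data from this report.

```python
# Minimal merit-order dispatch sketch: serve demand with the cheapest generation first.
# (capacity in megawatts, operating cost in dollars per megawatt-hour); all values illustrative.
plants = [
    ("hydro",        500, 5),
    ("coal",         800, 25),
    ("gas_combined", 600, 40),
    ("gas_peaker",   300, 90),
]

def dispatch(demand_mw):
    """Return each plant's output and the operating cost of the most expensive plant used.

    Assumes demand does not exceed total capacity."""
    schedule, remaining, marginal_cost = {}, demand_mw, 0
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        output = min(capacity, remaining)
        if output > 0:
            schedule[name] = output
            marginal_cost = cost
            remaining -= output
    return schedule, marginal_cost

# Off-peak demand is met by the cheap plants; peak demand pulls in the costly peaker,
# which is why serving peak demand is generally more expensive.
print(dispatch(1000))   # ({'hydro': 500, 'coal': 500}, 25)
print(dispatch(2100))   # ({'hydro': 500, 'coal': 800, 'gas_combined': 600, 'gas_peaker': 200}, 90)
```

The sketch also shows why serving peak demand is generally more expensive: the last plant brought online to meet the peak has a much higher operating cost than the plants used during other parts of the day.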
The Federal Energy Regulatory Commission (FERC), which oversees wholesale electricity markets, among other things, has statutory responsibility for ensuring that wholesale electricity prices are just and reasonable, and not unduly discriminatory or preferential. Technological innovation in recent years has led to the development and increased availability of new and more advanced technologies that can be deployed where customers live and that give customers greater control over how they use electricity. These technologies can be deployed by customers, electricity suppliers, or third-party providers—independent entities that sell specific products and services to customers. Figure 3 illustrates the deployment of these technologies at a residence. Distributed generation systems. Distributed generation systems are relatively small-capacity electricity generation systems, such as those installed at residences or other customer locations throughout the grid, generally at or near the site where the electricity will be used. A common type of distributed generation system is a solar system, such as solar photovoltaic panels installed on a roof. Solar systems installed at a customer’s location allow the customer to generate electricity for their own use and send excess electricity to the grid that electricity suppliers can use to meet other customers’ electricity needs. Solar systems utilize inverters—devices installed with the system to convert the electricity it generates into a form usable by the customer and the grid. Advanced meters and associated infrastructure. Advanced or “smart” meters are deployed at residences and other customers’ locations by grid operators to allow them to collect data on customers’ electricity use at more frequent intervals than is possible with traditional meters. Advanced meters are integrated with communications networks to transmit meter data to grid operators. This integration can reduce or eliminate the need for grid operator personnel to read meters at a customer’s location. Data management systems store, process, and receive data from advanced meters. Certain advanced meters can be enabled to communicate directly with customers’ smart devices by sending information, such as electricity prices, that these devices use to take actions such as reducing consumption. Distributed storage systems. Distributed storage systems—such as batteries located at homes—allow customers to store electricity from the grid or from a distributed generation system for use at a later time. For example, customers with distributed storage systems may store electricity produced from an on-site solar system during the day when their electricity consumption is lower than the amount of electricity the solar system produces. Customers may then use this stored electricity generated by the solar system later in the day, for example, during peak demand periods. Electricity management technologies. Smart devices—such as smart thermostats, smart appliances, and electric vehicles with automated charging controls—contain electronics capable of automatically adjusting electricity consumption. Smart devices may be controlled by energy management systems that allow customers to automate control of their devices and respond to changing grid conditions. For example, customers could choose to program their electric vehicle to charge at night to avoid charging during peak demand periods. 
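As a rough illustration of how a rooftop solar system and home battery of the kind described above can shift a household's draw on the grid away from the evening peak, consider the following sketch. The hourly consumption and solar output profiles and the battery size are illustrative assumptions.

```python
# Illustrative hourly profile for one home (kWh per hour); all values are assumptions.
solar_output = [0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0]
consumption  = [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 4, 5, 5, 4, 3, 2, 1]

battery_capacity_kwh = 10.0
stored = 0.0
grid_purchases = []

for hour in range(24):
    net = consumption[hour] - solar_output[hour]   # positive: home needs more than solar provides
    if net < 0:
        # Excess solar: charge the battery; surplus beyond its capacity is simply
        # not used in this sketch (no export to the grid is modeled).
        stored = min(battery_capacity_kwh, stored - net)
        net = 0.0
    else:
        # Draw from the battery before buying from the grid.
        from_battery = min(stored, net)
        stored -= from_battery
        net -= from_battery
    grid_purchases.append(net)

print(f"Electricity purchased from the grid: {sum(grid_purchases):.1f} kWh")
print(f"Evening peak (5-8 p.m.) purchases:   {sum(grid_purchases[17:20]):.1f} kWh")
```

In this example the battery absorbs midday solar output the household cannot use immediately and discharges in the early evening, reducing what the household buys from the grid during the hours when demand, and the cost of serving it, is highest.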
Federal and state policymakers have used a variety of policies to encourage deployment of solar systems, advanced meters, and other residential electricity storage and management technologies. Specifically, policymakers have used (1) federal financial incentives and state electricity policies to encourage residential deployment of solar systems; (2) federal grants and state requirements to encourage residential deployment of advanced meters; and (3) federal and state financial incentives and state deployment targets to encourage residential deployment of electricity storage and management technologies. Federal and state policymakers have used financial incentives and other policies to encourage residential deployment of solar systems. Many of these policies shorten the expected payback period of solar systems—the period of time it takes customers to realize savings equal to the cost of installation. These policies typically do so by reducing the up-front costs of installation or by increasing the level of expected savings from the system. At the federal level, the government has established tax incentives encouraging residential deployment of solar systems. For example, the Investment Tax Credit provides a tax credit, equal to 30 percent of the cost of installing a solar system, to the owner—either a customer that owns the system or third-party provider that installs and owns the system on behalf of a customer. This tax credit can generally be claimed in full in the tax year during which the system is completed, allowing the customer to immediately offset a large portion of the installation costs. Data are not available on the amount of revenue losses attributable to the use of the federal tax credits for residential solar systems. However, the federal government is expected to forgo billions of dollars in tax revenue as a result of individuals and corporations claiming federal tax credits for installing solar systems and other renewable energy technologies. At the state level, policymakers have used several types of policies to encourage residential deployment of solar systems. Net metering policies. Net metering policies implemented by state regulators generally require electricity suppliers to offer net metering programs that credit customers for the electricity they send to the grid from their solar or other distributed generation system. As of July 2016, 41 states had established policies that require electricity suppliers to offer net metering programs to electricity customers, according to the Database of State Incentives for Renewables and Efficiency. Under net metering programs, electricity suppliers generally subtract the amount of electricity customers send to the grid from the amount of electricity they purchase from the grid to determine the net amount of electricity for which customers are billed. This reduces or in some cases eliminates customers’ payments for electricity they obtain from the grid. Net metering programs can provide more value for customers who pay high electricity prices because these programs often credit customers for electricity they send to the grid at the same retail price they pay to purchase electricity from the grid. For example, a customer facing an electricity price of 23 cents per kWh would receive a $23 bill credit for each 100 kWh increment of electricity sent to the grid, whereas a customer facing a price of 10 cents per kWh would be credited $10 for sending the same amount of electricity to the grid. 
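The net metering credit described above is straightforward arithmetic. The sketch below illustrates it; the consumption and export quantities are hypothetical, while the 23-cent and 10-cent prices mirror the example in the text.

```python
# Illustrative net metering bill calculation: the customer is billed for net
# consumption (grid purchases minus electricity sent to the grid) at the
# retail price. Quantities are hypothetical; programs treat excess exports
# and credit rollover differently, which this sketch ignores.

def net_metering_bill(purchased_kwh, exported_kwh, retail_price_per_kwh):
    net_kwh = purchased_kwh - exported_kwh
    return net_kwh * retail_price_per_kwh

# A customer facing a 23-cent retail price receives a $23 credit for each
# 100 kWh exported, versus a $10 credit at a 10-cent price, as in the report's example.
print(net_metering_bill(purchased_kwh=600, exported_kwh=100, retail_price_per_kwh=0.23))  # 115.0
print(net_metering_bill(purchased_kwh=600, exported_kwh=100, retail_price_per_kwh=0.10))  # 50.0
```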
Policies allowing third-party-owned solar systems. Some states have adopted policies that make it possible for third-party providers to install solar systems, which the providers own and operate, on residential customers’ private homes—thereby allowing solar systems to be deployed on the homes of customers who could not otherwise pay the up-front costs of installing the systems. Under these arrangements, the third-party provider pays to install the solar system, and the customer agrees to buy electricity generated by the solar system or to make lease payments on the system to the third-party provider. As of July 2016, at least 26 states authorized third-party providers to enter into agreements to sell electricity in this way to homeowners, according to the Database of State Incentives for Renewables and Efficiency. Third-party providers may be able to obtain federal tax incentives that are not available to individual homeowners, in addition to claiming the federal Investment Tax Credit. These tax benefits may be passed through, in part, to residential customers in the form of lower prices for electricity purchased from the solar system or more favorable system lease terms. Incentives. State financial incentives encourage the residential deployment of solar systems by offsetting some of the up-front cost of deploying the systems. For example, as of October 2016, 14 states provided personal tax credits for installing solar systems, according to the Database of State Incentives for Renewables and Efficiency. In addition to tax incentives, states can provide grants, loans, or other financial support to individuals who install solar systems. For example, the California Public Utilities Commission reported that from 2007 to 2016, the California Solar Initiative made available more than $2 billion for a program that funded rebates to residential and other customers who installed solar systems. Other policies. States have established several other types of policies to encourage the deployment of residential solar systems, as we found in our review of stakeholder reports and the Database of State Incentives for Renewables and Efficiency. These policies are described in table 1. Federal policymakers have used grants, and state policymakers have used requirements and regulatory decisions, to encourage residential deployment of advanced meters. At the federal level, DOE reported that it provided more than $3.4 billion in grants through its Smart Grid Investment Grant program from 2009 through 2015 for upgrades to the electricity grid, including the deployment of advanced meters. In addition, DOE provided about $600 million through its Smart Grid Demonstration Program for demonstration projects that involved innovative applications of existing and emerging grid technologies and concepts, which supported some additional advanced meter deployments. At the state level, policymakers in some states have enacted policies that require regulated grid operators to deploy advanced meters at residences and other customer locations or to file deployment plans with regulators, according to an EIA analysis. For example, the California Public Utilities Commission established a requirement that the three grid operators it regulates install advanced meters at customers’ residences; the commission also authorized the operators to recover associated deployment costs through increased retail electricity prices. 
In addition, DOE officials told us that regulators in some states have approved proposals from the grid operators they regulate to install advanced meters and recover the costs of installation from customers. However, some state regulators do not have jurisdiction over all grid operators in their state, such as those that are municipally owned; as a result, state regulatory policies may not affect the deployment of advanced meters by all grid operators in a state. Federal and state policymakers have used financial incentives, and state policymakers have used deployment targets, to encourage residential deployment of electricity storage and management technologies. The federal government has provided incentives to promote these technologies, though federal support for these technologies has been more limited than federal support for advanced meters and solar systems. For example, DOE funding through both the Smart Grid Investment Grant program and Smart Grid Demonstration Program provided some support for the installation of smart devices, including thermostats that can receive price and other data from electricity suppliers. Furthermore, customers who install residential storage systems potentially are eligible for the Investment Tax Credit when they use the storage system to store energy from their solar system; however, there is no federal tax incentive for stand-alone storage systems. Additionally, customers who purchase a qualifying electric vehicle potentially can receive a federal tax credit of $2,500 plus an additional credit depending on the size of the vehicle’s battery. Electric vehicles are primarily used for transportation, but potentially can also be used as an electricity storage and management technology. At the state level, we identified examples of state policies that have encouraged the deployment of electricity storage and management technologies, based on interviews with stakeholders and our review of state documentation. For example, in 2013, the California Public Utilities Commission set targets for the electricity suppliers that it regulates to procure about 1.3 gigawatts of storage capacity by 2020; this includes procuring capacity from distributed storage systems installed by residential and other customers. In 2012, California’s governor issued an executive order focused on reducing greenhouse gas emissions that established a target of having more than 1.5 million zero-emission vehicles—including electric vehicles—on the road in the state by 2025. In addition, the New York State Energy Research and Development Authority partnered with an electricity supplier in the state to provide residential and other customers with a financial incentive of $2.10 per watt for distributed storage systems installed and operational before June 1, 2016 in order to reduce the electricity supplier’s peak demand. The deployment of solar systems and advanced meters has increased, especially in some states, but other technologies have not been as widely deployed. Specifically, our analysis of EIA data indicated that the deployment of residential solar systems has increased significantly in some states, but residential solar systems account for a small portion of nationwide electricity generation. Additionally, our analysis of EIA data indicated that advanced meters have been widely deployed among residential customers in some states, and the use of advanced meters has increased nationwide. 
However, available information suggests that residential electricity storage and management technologies have not been widely deployed. Residential customers increasingly deployed solar systems from 2010 through 2015. Specifically, the total number of residential electricity customers with solar systems increased sevenfold over this period, according to EIA data. Despite this significant increase, our analysis of EIA data found that only about 0.7 percent of all residential customers nationwide had installed solar systems in 2015. In addition, residential solar generation was low overall, accounting for approximately 0.1 percent of nationwide electricity generation, according to our analysis of EIA estimates. At the state level, every state experienced increases in the number of residential customers with solar systems from 2010 through 2015, but certain states accounted for most of the growth, according to EIA data. For example, during this period, more residential customers in California installed solar systems than customers in any other state. Furthermore, California, together with nine other states, accounted for nearly all of the increase in the number of customers with solar systems. The three states with the highest proportion of residential customers with solar systems in 2015 were Hawaii, with more than 14 percent; California, with 4 percent; and Arizona, with 3 percent. These three states had state or local policies that encouraged the installation of solar systems. These policies included net metering policies and policies that allow residential customers to purchase power from third parties that install solar systems on customers’ roofs, among others. Figure 4 shows data from EIA on the number of customers who installed solar systems and the systems’ total electricity generation capacity from 2010 through 2015. In addition to identifying the aforementioned federal and state policies to encourage the deployment of residential solar systems, we identified through our review of reports and discussions with stakeholders that other factors—such as increased efficiency, declining system costs, and high electricity prices—also contributed to the deployment of residential solar systems in some states. The efficiency of solar photovoltaic panels has substantially increased across a broad range of manufacturers and panel types over the past several decades, according to a National Renewable Energy Laboratory analysis. As a result, similarly sized panels can produce more electricity, improving the cost effectiveness of systems because customers may need to purchase and install fewer panels to achieve a desired amount of electricity generation. Additionally, recent decreases in the costs of solar systems have made them more economical for residential customers nationwide. DOE’s Lawrence Berkeley National Laboratory reported that the national median price to install a residential solar system has decreased since 1998, with the most rapid declines occurring after 2009. (See fig. 5 below.) Based on these data, a 6-kilowatt residential solar system that would have cost about $51,000 in 2009 cost approximately $25,000 in 2015. High electricity prices in some states also contributed to increasing solar system deployment, according to several stakeholders we interviewed, including electricity suppliers. 
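The installed-cost figures above also imply a steep decline in price per watt, which, together with retail electricity prices, drives the simple payback arithmetic discussed in this report. The sketch below works through that arithmetic; the assumed annual output is illustrative, and the two retail prices are the 2015 Hawaii and national-average figures cited below.

```python
# Rough arithmetic implied by the installed-cost figures above, plus a simple
# payback comparison. Annual output is an assumption; incentives, financing,
# and panel degradation are ignored.

SYSTEM_KW = 6.0
for year, installed_cost in ((2009, 51_000), (2015, 25_000)):
    print(year, round(installed_cost / (SYSTEM_KW * 1000), 2), "dollars per watt")  # ~8.5 vs. ~4.17

def simple_payback_years(installed_cost, annual_kwh, retail_price_per_kwh):
    """Years to recover the installed cost through avoided grid purchases."""
    return installed_cost / (annual_kwh * retail_price_per_kwh)

annual_kwh = 9_000  # assumed annual output for a 6-kilowatt system
print(round(simple_payback_years(25_000, annual_kwh, 0.296), 1))  # at Hawaii's 2015 price (29.6 cents)
print(round(simple_payback_years(25_000, annual_kwh, 0.127), 1))  # at the national average (12.7 cents)
```

The same system pays back far sooner where retail prices are high, consistent with the stakeholder views described here.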
Solar systems produce savings for customers when the cost of the electricity that the systems generate is lower than the cost of electricity that customers would otherwise have purchased from the grid. Generally, the higher the retail price for grid electricity, the more likely it is that a solar system will be cost effective for customers, according to stakeholders we interviewed. For example, in 2015, Hawaii had the highest retail electricity price for residential customers—29.6 cents per kWh, compared to the national average of 12.7 cents per kWh. Hawaii also had the highest proportion of customers that deployed residential solar systems, with more than 14 percent of residential customers having systems by the end of 2015. Grid operators widely deployed advanced meters among residential customers in some states from 2007 through 2015, according to EIA data. Nationwide, according to EIA data, the number of advanced meters installed at residences grew 26-fold in recent years, from 2 million meters in 2007 to 57 million meters (43 percent of all residential meters) in 2015. However, as of 2015, levels of advanced meter deployment at residences varied substantially by state, as shown in figure 6. Some states that have experienced widespread deployment of advanced meters—such as California, Maine, and Vermont—had established policies requiring or encouraging grid operators to deploy meters. For example, in California, about 99 percent of the residential meters installed by the state’s three regulated grid operators were advanced meters, as of 2015, according to EIA data. Other states, such as New York and Rhode Island, had virtually no advanced meter deployment. In addition to the aforementioned federal and state policies to encourage the deployment of advanced meters, stakeholders we interviewed told us that the economic benefits of installing meters also contributed to their deployment. For example, advanced meters allow electricity suppliers to use fewer personnel and other resources for on-site meter reading. In one case, representatives from a rural cooperative electricity supplier in Arizona said that they began installing some form of advanced meters about 15 years ago, largely because on-site meter readings for their dispersed customer base were time consuming and costly. These representatives said that all of their approximately 40,000 customers now have advanced meters. Residential electricity storage and management technologies have not been widely deployed, according to electricity suppliers we interviewed and available data from EIA; however, comprehensive data on their deployment were not available. Residential deployment of distributed storage systems, such as battery storage systems, has been limited, according to representatives from several electricity suppliers we interviewed, although comprehensive data on the deployment of these systems were not available. For example, representatives from one electricity supplier we interviewed said that as of May 2016, there were 72 customers with residential storage systems in their service territory of more than 1 million customers. Similarly, residential deployment of other technologies that can manage electricity consumption—such as smart thermostats, smart appliances, and electric vehicles—is limited, according to several electricity suppliers we interviewed. Comprehensive data are not available on the extent to which residential customers have deployed these electricity management technologies. 
However, available data from EIA indicate that certain electricity management technologies are becoming increasingly available. Specifically, the number of electric vehicles available for sale has increased from almost none in 2010 to more than 90,000 in 2014, according to EIA data. Several stakeholders told us that several factors have kept the deployment of these technologies low. These factors include high up-front costs for technologies such as distributed storage systems, which can cost a few thousand dollars per system. In addition, as discussed later in this report, customers may have limited opportunities to receive electricity bill savings that offset these up-front costs. Solar systems, advanced meters, and electricity storage and management technologies could increase the efficiency of grid operations, but the increasing deployment of residential solar systems has begun to pose challenges for grid management in some areas. Policymakers have implemented or are considering measures to maximize the potential benefits and mitigate the potential challenges associated with the increasing deployment of these technologies. Solar systems, advanced meters, and electricity storage and management technologies have the potential to lead to more efficient grid operations by enabling individual customers to generate, store, and manage their consumption of electricity in response to conditions on the grid, as we found in our analysis of reports and stakeholder interviews. For example, the supply of electricity must constantly be balanced with demand for electricity, and customers can use these technologies to decrease individual consumption of electricity from the grid when demand is high and increase consumption when demand is low. More efficient grid operations can reduce the cost of producing electricity and reduce the need for investments in additional generation, transmission, and distribution infrastructure, according to several reports we reviewed. Some of these cost savings can result in lower consumer prices. These technologies can provide additional benefits, such as potentially reducing greenhouse gas and other harmful emissions. Several grid operators we interviewed identified various factors that could affect the extent to which these benefits are realized, including where technologies are located, how they are operated, and variations in conditions on the grid, among others. Below we highlight several potential benefits associated with solar systems, advanced meters, and other electricity storage and management technologies identified in our review of reports and interviews with stakeholders: Solar systems. Several reports we reviewed and stakeholders we interviewed identified examples of how residential solar systems can help make grid operations more efficient. For example, in some locations, electricity generated by solar systems can reduce peak demand for electricity from the grid, which can lower electricity costs. Additionally, because these systems generate electricity near the point where it is consumed, they can potentially reduce how much electricity grid operators have to transmit to customers, which can help defer the need to upgrade distribution or transmission lines, thus avoiding potential cost increases for customers. In addition, grid operators and third-party providers told us that improvements in solar system technologies, such as advanced inverters, may create additional ways for solar systems to increase the efficiency of grid operations. 
Advanced inverters have a variety of potentially useful functions, including the ability to adjust a solar system’s electricity output. Grid operators can use these functions to help balance moment-to-moment changes in electricity demand. In addition, according to several reports we reviewed, solar systems generate electricity without producing greenhouse gas emissions or other harmful pollutants, and electricity generated by these systems can offset the need for electricity generated by power plants that emit these pollutants. Advanced meters. Advanced meters can improve the efficiency of grid operations by providing grid operators with better information on grid conditions and by enabling customers to manage their generation, storage, and consumption of electricity in ways that align with grid conditions, as we found in examples provided during interviews with several stakeholders and in reports we reviewed. According to DOE officials, advanced meters can provide more detailed information about grid conditions by, for example, notifying grid operators when individual customers have lost electricity service. This information helps grid operators identify and remedy outages in a more timely manner. In addition, advanced meter data collection and communications capabilities, when enabled, can help customers better manage their electricity consumption in ways that align with grid conditions, such as by reducing electricity consumption during peak demand periods. Specifically, advanced meters measure customer electricity consumption data at shorter intervals than traditional meters, and this more detailed information can help customers better understand and adjust their electricity consumption patterns. Additionally, certain advanced meters can communicate information on grid conditions (e.g. periods of high demand) directly to smart devices that can automatically modify their electricity consumption (e.g. by reducing consumption during these periods of high demand). Electricity storage and management technologies. Technologies that enable customers to store electricity and manage their electricity consumption could help improve the efficiency of grid operations, among other benefits, according to reports we reviewed and stakeholders we interviewed. In particular, customers could use technologies, such as smart thermostats and possibly electric vehicles, to modify their electricity consumption in response to the overall demand for electricity from the grid. For example, a customer could program a smart thermostat to reduce electricity consumption when demand for electricity is high. Likewise, storage systems can store electricity generated at times of low demand for use when demand is high. These systems also can provide other benefits to individual customers, such as giving customers a temporary source of backup electricity in the event of an outage. Using multiple residential technologies in combination increases the potential to improve the efficiency of grid operations, according to stakeholders we interviewed and reports we reviewed. For example, while a solar system could reduce demand during peak periods, several grid operators we interviewed told us that peak demand in their service areas occurs in the evening, when solar systems generate little or no electricity. However, a solar system combined with a storage system could store electricity generated during the day for use in the evening, when the demand for electricity is high. 
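The combined solar-and-storage operation described above can be sketched as a simple charge-and-discharge rule; the hourly solar output, household load, and battery size below are hypothetical.

```python
# Illustrative solar-plus-storage operation: charge the battery from excess
# midday solar output and discharge it during the evening peak. All hourly
# profiles and the battery size are hypothetical.

SOLAR_KWH = [0]*6 + [0.5, 1.5, 2.5, 3.0, 3.2, 3.2, 3.0, 2.5, 1.5, 0.5] + [0]*8  # by hour, 0-23
LOAD_KWH  = [0.6]*16 + [1.8, 2.2, 2.4, 2.0, 1.5] + [0.8]*3                       # evening-peaked load
BATTERY_CAPACITY_KWH = 10.0

def grid_purchases(solar, load, capacity):
    stored, purchases = 0.0, []
    for gen, use in zip(solar, load):
        net = use - gen
        if net < 0:                       # excess solar: charge the battery
            stored = min(capacity, stored - net)
            purchases.append(0.0)
        else:                             # shortfall: discharge first, then buy from the grid
            from_battery = min(stored, net)
            stored -= from_battery
            purchases.append(net - from_battery)
    return purchases

hourly = grid_purchases(SOLAR_KWH, LOAD_KWH, BATTERY_CAPACITY_KWH)
print(round(sum(hourly[17:21]), 1))  # grid purchases during the evening peak hours: 0.0
# Without storage, the same household would buy about 8.1 kWh from the grid
# during those hours, since solar output is near zero in the evening.
```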
In addition, smart devices provide customers with the flexibility to shift their consumption to periods when a solar system is producing electricity so they can make full use of the electricity the system generates. Several grid operators we interviewed told us they have begun to experience grid management and other challenges in some areas as deployment of residential solar systems increases, but they said these challenges generally have been manageable because overall deployment of these systems has been low. Several stakeholders we interviewed identified various factors that could affect the extent to which these challenges occur, including where solar systems are located, how they are operated, and variations in conditions on the grid, among others. Several stakeholders also identified similar challenges potentially posed by other residential technologies, although these technologies have not been widely deployed. The challenges we identified in our analysis of reports we reviewed and in the views of stakeholders and federal officials we interviewed include the following: Limited information. Several grid operators we interviewed told us that they typically only have information on customers’ net consumption of electricity from the grid, and they generally do not know (1) how much electricity is being generated by a customer’s residential solar system or (2) how much total electricity is being consumed at a customer’s location. According to these grid operators, such information is important to effectively manage grid operations and meet customers’ total electricity needs at all times. For instance, during periods when a solar system is not producing electricity, such as when clouds or snow prevent sunlight from reaching solar panels, customers may be forced to shift from relying on their solar system to relying on the grid to meet their total electricity needs. For areas with high deployment of residential solar systems, this lack of information can contribute to uncertainty about how best to prepare for and respond to changes in electricity demand, which can, in turn, result in higher costs for customers, as we found in our review of reports. For example, grid operators may need to pay for a greater number of flexible, fast-starting power plants to be on standby to account for changes in demand. If operators had better information, they might not need to have as many plants on standby. Having a greater number of fast-starting plants on standby can raise operating costs, which operators can pass on to all customers in the form of higher electricity bills. Representatives from the transmission system operator in California told us that they also lack information about the electricity usage patterns of electricity storage and management technologies, such as storage systems and customer smart devices. This lack of information could further complicate grid operation and planning if the technologies are added to the grid in increasing numbers. Limited control. Grid operators generally do not control where residential solar systems are installed or how much electricity these systems produce and when. The installation of solar systems is generally based on customers’ preferences, while the amount of electricity that solar systems generate is generally based on the amount of usable sunlight available to the systems. 
In contrast, grid operators generally control the level of electricity generated by power plants and, in many regions, plan for the types of power plants that are built and where they are located. According to several reports we reviewed, the lack of grid operators’ control over solar systems’ output can present challenges to these operators and result in additional costs if solar systems’ locations and electricity output do not align with grid conditions. For example, according to representatives from a grid operator in Hawaii, high concentrations of residential solar systems in some neighborhoods sent more electricity to the grid than the distribution infrastructure in those neighborhoods was designed to accommodate. These representatives said that this resulted in the need to upgrade the distribution infrastructure to increase the amount of electricity it could accommodate from solar systems; these upgrades in turn resulted in additional costs for customers. In addition, according to representatives from two grid operators we interviewed, a lack of control over rooftop solar systems has, in some circumstances, resulted in the operators reducing the amount of electricity generated by large, renewable power plants under their control, even though electricity from these larger renewable power plants is less expensive to procure than the electricity that grid operators purchase from residential solar systems. Furthermore, the location and operation of other residential technologies—such as storage systems and smart devices—are not controlled by grid operators. Based on our review of several reports, these technologies could mitigate or exacerbate operational challenges depending on how well their use aligns with grid conditions. Lower revenues for electricity suppliers. Under the traditional business model, electricity suppliers earn revenue when they sell electricity to customers. Customers who install solar systems use less electricity from the grid, and this decline in usage can reduce electricity supplier revenues, according to several reports we reviewed. In addition, net metering policies under which electricity suppliers credit customers for the electricity these customers send to the grid can reduce electricity supplier revenues. Lower consumption of electricity from the grid may produce some cost savings for suppliers (e.g. reduced fuel consumption). However, several electricity suppliers we interviewed told us that many of their costs—such as costs associated with investments they previously made to build and maintain power plants and transmission and distribution lines—are fixed in the short term and will not decline even if solar customers use less electricity from the grid. To the extent that reduced electricity supplier revenues exceed any cost savings from customers’ use of solar systems, suppliers may collect insufficient revenues to cover the costs of operating and maintaining the grid, and they may earn a lower financial return, as we found in our review of reports from DOE national laboratories and other stakeholders. Our review of these sources also found electricity storage and management technologies could exacerbate challenges related to lower revenues, for example, if storage systems facilitate further reductions in customers’ use of electricity from the grid. 
According to several reports we reviewed from multiple sources, the greater use of residential storage and electricity management technologies—particularly storage systems—in combination with significantly expanded deployment of solar systems could lead to a cycle of reduced electricity consumption, declining supplier revenues, and increasing electricity prices, potentially creating long-term financial challenges for electricity suppliers. Cost shifts among customers. If electricity suppliers collect revenues that are insufficient to cover the costs of operating and maintaining the grid, as a result of lower electricity consumption from customers who have solar systems, some of these costs could be shifted to non-solar customers, according to several reports we reviewed and stakeholders we interviewed. Several electricity suppliers we interviewed expressed concern about cost shifts. Two suppliers told us that while cost shifts had been negligible with low levels of deployment, increasing deployment has made cost shifts more significant. According to an electricity supplier in Arizona, as reported in a 2013 filing to the state regulator, costs averaging $1,000 per net-metered solar system per year were shifted from residential customers with net-metered solar systems to customers without such systems. This resulted in a shift of an estimated $18 million in total annual costs. Another Arizona electricity supplier told us that in its service territory, costs were often shifted from wealthier customers, who could afford to install residential solar systems, to lower-income customers, such as customers on tribal reservations. Nevertheless, according to several reports we reviewed, solar systems can provide benefits to the grid and society, as well as result in financial savings for other customers. Recent estimates of the specific costs and benefits of solar systems have varied widely, according to a DOE national laboratory report we reviewed; these variations have led to differing estimates of the potential cost shifts some customers may face. Increasing complexity of electricity industry oversight. The increasing deployment of solar systems may increase the complexity of overseeing the electricity industry, according to our analysis of reports and the views of stakeholders we interviewed. For example, increases in residential customers' deployment of solar systems may affect electricity transmission system operations. Residential solar systems, when installed in large enough numbers within a geographic area, can generate electricity that moves onto the transmission system, according to two reports we reviewed. In areas with high solar system deployment, grid reliability problems could cause a large number of these systems to disconnect from the grid at the same time. Such an occurrence could, in turn, cause a rapid drop in the amount of electricity being sent to the grid from these systems and make it challenging for transmission grid operators to maintain the reliable operation of the grid. The installation of residential solar systems is subject to state oversight, while the reliability of the transmission system is subject to FERC oversight. This may complicate oversight and operation of the transmission system as additional solar and other residential technologies are added to the grid. In addition, once solar systems are installed, FERC-regulated transmission system operators generally do not have information about, or control over, the solar systems' operation.
Representatives from DOE told us that electricity storage and management technologies also increase the complexity of electricity industry oversight. Policymakers in some states and the federal government are considering measures designed to maximize the potential benefits of advanced meters, solar systems, and residential electricity storage and management technologies, while mitigating the potential challenges, based on our analysis of reports and the views of stakeholders we interviewed. For example, based on our review, policymakers are considering measures in several key areas: Prices for electricity purchased from the grid. Policymakers in several states have implemented or are considering measures to change how customers pay for electricity in order to increase the efficiency of grid operations and provide electricity suppliers with sufficient revenues to maintain the grid. For example, time-based prices—prices that vary throughout the day with demand—can be used to encourage customers to manage their electricity consumption in a way that aligns with conditions on the grid. Specifically, time-based prices are higher when demand for electricity is high and lower when demand for electricity is low, which can encourage customers to shift their electricity consumption from high-demand to low-demand times (see sidebar). However, in 2004, we found that most customers faced unchanging electricity prices, which limited their incentive to respond to changing grid conditions. According to EIA data, as of 2015, only 5 percent of residential electricity customers nationwide paid time-based electricity prices. Several state regulators recently have allowed electricity suppliers to adopt voluntary time-based prices, and regulators in other states are considering this approach. In addition to time-based prices, policymakers in several states have adopted policies that periodically and automatically adjust customers' electricity prices to ensure that electricity suppliers earn sufficient revenue to cover the costs of operating and maintaining the grid and that they earn a rate of return allowed by state regulators. These policies can make electricity suppliers less dependent on selling a specific amount of electricity, because the suppliers earn the same amount of revenue regardless of how much electricity they sell. However, according to two electricity suppliers and a state regulator we interviewed, while such policies help ensure electricity suppliers receive sufficient revenue even as solar systems reduce the amount of electricity that customers purchase, they do not necessarily address concerns about cost shifts among customers. Retail electricity prices historically have been designed to reflect the average cost of serving customers for an extended period, up to a year or more, as we found in past GAO work and stakeholder reports. However, as we previously reported, prices also can be designed to vary with the cost of serving retail customers. These time-based prices can be designed in different ways to align with grid conditions, such as in a time-of-use pricing plan. With time-based prices, customers can achieve bill savings by shifting their electricity use from high-cost times to low-cost times. For example, a storage system could be used to store electricity from the grid during times when prices are lower and discharge the stored electricity when prices are higher.
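A minimal sketch of that bill arithmetic under an assumed two-period time-of-use plan follows; the prices, consumption quantities, and the amount of load shifted are illustrative assumptions, not rates from any supplier discussed in this report.

```python
# Illustrative time-of-use bill comparison: shifting consumption (for example,
# by charging a battery off-peak and discharging it on-peak) reduces the bill.
# All prices and quantities are assumptions.

ON_PEAK_PRICE = 0.30    # $ per kWh during high-demand hours
OFF_PEAK_PRICE = 0.08   # $ per kWh during low-demand hours

def monthly_bill(on_peak_kwh, off_peak_kwh):
    return on_peak_kwh * ON_PEAK_PRICE + off_peak_kwh * OFF_PEAK_PRICE

baseline = monthly_bill(on_peak_kwh=300, off_peak_kwh=500)
# Shift 150 kWh of on-peak use to off-peak hours with storage or smart appliances.
shifted = monthly_bill(on_peak_kwh=150, off_peak_kwh=650)
print(round(baseline - shifted, 2))  # monthly savings from shifting, about $33
```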
Similarly, smart appliances could be programmed to operate during periods when electricity costs are relatively low. Compensation for electricity sent to the grid. To mitigate challenges such as reduced electricity supplier revenues and cost shifts among customers, policymakers in several states have begun to implement or are considering measures to change how customers are compensated for the electricity they generate and send to the grid. In making this determination, policymakers have considered the benefits that solar systems and other technologies provide as well as any costs that result from the installation of solar systems, among other factors. In October 2015, in Hawaii—a state with high deployment of solar systems and high retail electricity prices—the Hawaii Public Utilities Commission closed the state's existing net metering program to new participants and established options for new solar systems, including reducing the price customers would be paid for the electricity they send to the grid. The Commission stated that this would allow the state's electricity suppliers to procure electricity in a more cost-effective manner and reduce electricity costs for all customers. Policymakers in other states have made different decisions about whether and how to modify compensation for electricity sent to the grid based on their assessment of the benefits and costs of distributed solar systems in their states. For example, in California, state regulators made changes to the state's net metering policy that will compensate new solar customers for the electricity they send to the grid at a price that varies throughout the day based on overall customer electricity demand. Grid planning. Policymakers in several states are beginning to implement or are considering measures to incorporate solar and electricity storage and management technologies into grid planning. In particular, state regulators in California and New York have developed policies requiring regulated electricity suppliers in their states to analyze the grid and identify areas where customer deployment of solar systems and electricity storage and management technologies could provide the greatest benefit, given local grid conditions. Locating combined solar and storage systems in areas where peak demand is projected to exceed the grid's capacity to transmit electricity to customers generally would be more beneficial than locating such a system in an area with ample grid capacity, as we found in our review of reports. For the long term, regulators in New York are considering how prices for electricity could be modified to encourage customers to locate and operate technologies in a way that is most beneficial for grid operation, given conditions throughout the grid, according to state documents we reviewed. Furthermore, according to one stakeholder report we reviewed, electricity prices could be modified to vary by both time and location to provide customers with an economic signal about variations in grid conditions. In addition, some regional transmission operators are beginning to incorporate into their grid planning processes estimates of the future deployment of solar systems, in an effort to identify the extent to which these systems will reduce future demand for electricity from the grid. Technology and data solutions. 
Policymakers at the state and federal levels are considering measures to mitigate challenges associated with grid operators' limited information about and lack of control over solar systems as well as to facilitate the greater use of data from advanced meters. For example, efforts are ongoing to develop industry standards to facilitate the development and use of advanced inverters that could provide grid operators with more information about solar systems' electricity generation as well as provide them with some control over these systems' electricity output. Some states—such as California and Hawaii—have begun to develop policies to use advanced inverters for future solar systems that are connected to the grid. Additionally, in 2012, DOE helped launch the Green Button initiative to encourage grid operators to provide customers with electricity usage data from advanced meters in a standardized format. Such standardized data formats allow third-party providers to more easily develop products—such as electricity management software that can control smart devices—and help customers manage their electricity consumption in ways that align with conditions on the grid, according to our review of reports. The role of the electricity supplier. Policymakers in some states are considering whether the increasing use of solar systems and electricity storage and management technologies necessitates policy changes to ensure electricity suppliers remain financially viable and able to support the reliable operation of the grid. Specifically, some policymakers are considering changes to how electricity suppliers operate and generate revenue, including developing new sources of revenue. For example, according to a publication from the New York State Energy Planning Board, the current business model for electricity suppliers needs reform to ensure electricity suppliers can accommodate and adapt to greater deployment of solar systems and electricity storage and management technologies. As one component of a broad strategy of energy reforms, the New York State Department of Public Service has approved several demonstration projects to identify, among other things, new revenue sources for electricity suppliers. For example, one project involves an electricity supplier in the state administering a website that provides customers with electricity management information and access to third-party providers that sell electricity management products and services. Among other things, this project will evaluate various new sources of revenue for electricity suppliers, such as earning a percentage of revenues from sales of products and services made through this website. Regulatory coordination. Developing some measures to maximize the benefits and mitigate the challenges associated with the increasing deployment of advanced meters, solar systems, and electricity storage and management technologies may require coordination between federal and state regulators, as well as others, based on our review of information in reports and the views of stakeholders we interviewed. For example, as we noted in a 2004 report, the actions customers take in response to retail electricity prices can affect the electricity markets under FERC jurisdiction. In recommendations we made in 2004, we emphasized that FERC should continue to coordinate with states and other industry stakeholders to develop complementary policies related to electricity prices. 
Furthermore, according to DOE officials, among other things, it may be increasingly necessary to integrate electricity distribution and transmission system planning processes and for grid operators and regulators to collaborate to ensure that such technologies do not adversely affect the reliable operation of the transmission system. FERC officials we interviewed agreed that some opportunities exist for FERC and the states to collaborate as technology deployment increases, and they told us that FERC has some mechanisms to achieve such collaboration. Specifically, FERC officials told us that FERC collaborates with the states on issues of emerging interest in a variety of formal and informal settings. We provided a draft copy of this report to DOE and FERC for review and comment. DOE and FERC did not provide written comments or indicate their agreement or disagreement with our findings but provided technical comments, which we incorporated as appropriate. As agreed upon with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Chairman of FERC, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines technologies available to residential customers to generate, store, and manage their consumption of electricity. These technologies include distributed generation systems (e.g. solar systems), advanced meters, distributed storage systems, and electricity management technologies (e.g. electric vehicles and smart devices). Our objectives were to describe (1) key federal and state policies used to encourage the deployment of these technologies, (2) the extent to which these technologies are being deployed, and (3) the benefits and challenges of deploying these technologies. To address all three of these objectives, we reviewed reports and other documentation, as well as interviewed stakeholders. We identified relevant reports by conducting database and web searches and through suggestions from the stakeholders we interviewed. Specifically, we searched sources including Proquest, Inspec, SciSearch, among others, and the websites of national laboratories and organizations focused on electricity industry research. We selected 20 reports for in-depth review based on their relevance to the residential sector, a focus on commercially available technologies included in our scope, and the source of the report, including the perspective represented by that source. We selected studies from academics and research institutions, such as the Department of Energy’s (DOE) national laboratories, as well as studies from industry and other stakeholder groups representing different views. These reports were published from 2013 through 2016. The team also reviewed other documentation, including state regulatory filings made by electricity suppliers, key policy decisions by state and federal regulators, and reports on specific topics relevant to our work. 
In addition, we interviewed officials and representatives from 46 government agencies and stakeholder groups. In particular, we interviewed federal officials from DOE, the National Renewable Energy Laboratory, the Lawrence Berkeley National Laboratory, the Federal Energy Regulatory Commission, the Department of the Treasury, and the Internal Revenue Service. Furthermore, we selected stakeholders that provided single-state, regional, and national perspectives based on their experience with the deployment and use of relevant technologies and with policies related to these technologies, and the extent to which the group represented a diversity of perspectives. To select stakeholders representing single-state perspectives, we used EIA data to identify states that had high deployments of relevant technologies and reviewed state policymaking activity related to these technologies. We selected a non-generalizable sample of five states that have been actively addressing issues related to these technologies: Arizona, California, Hawaii, Minnesota, and New York. We interviewed state regulators and at least one electricity supplier in each state, and, in some cases, additional stakeholders such as state energy departments and consumer advocates. We identified additional stakeholders representing multi-state perspectives through our research, using our past work, and by considering suggestions from other stakeholders. We selected these additional stakeholders to represent different perspectives and experiences and to maintain balance with respect to stakeholders’ roles in the market. The stakeholders included industry associations, third-party providers (e.g. solar installers and software vendors), consumer advocacy organizations, academics, electricity suppliers, non-governmental organizations, and regional transmission organizations. Because this was a nonprobability sample of 46 government agencies and stakeholders, views are not generalizable to all potential government agencies and stakeholders. (For a list of stakeholders interviewed, see Appendix II). Throughout the report, we use the indefinite quantifier “several” when three or more stakeholder and literature sources combined supported a particular idea or statement. Our review of policies to encourage deployment focused on methods of direct policy support for the deployment of these technologies, as opposed to research and development activities. In addition, our review did not consider cybersecurity issues or standards for technology interoperability. To describe the deployment of technologies by residential customers to generate, store, and manage their consumption of electricity, we obtained and analyzed data from the Energy Information Administration’s (EIA) survey of electricity suppliers collected on EIA’s Form 861. Specifically, we merged data sets on advanced meters, net metering, demand response, distributed generation, dynamic pricing, and retail sales from 2007 through 2015. We calculated yearly totals for key variables, including: number of advanced meters, number of customers receiving daily access to electricity consumption data, number of residential customers with distributed generation under net metering agreements, and generating capacity of residential customers with distributed generation under net metering agreements, among others. 
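As an illustration of the data work described in this appendix, the following is a minimal sketch of merging survey extracts and computing yearly totals and a deployment percentage; the file names and column labels are hypothetical rather than EIA's actual Form 861 layout.

```python
# Sketch of merging EIA Form 861-style data sets and computing yearly totals
# and a deployment percentage. File names and column labels are hypothetical.
import pandas as pd

meters = pd.read_csv("advanced_meters.csv")    # year, state, utility_id, advanced_meters, total_meters
net_meter = pd.read_csv("net_metering.csv")    # year, state, utility_id, res_net_metering_customers, res_customers

merged = meters.merge(net_meter, on=["year", "state", "utility_id"], how="outer")

# Yearly national totals for key variables.
totals = merged.groupby("year")[
    ["advanced_meters", "total_meters", "res_net_metering_customers", "res_customers"]
].sum()

# Example deployment percentage: share of all meters that are advanced meters.
totals["advanced_meter_pct"] = 100 * totals["advanced_meters"] / totals["total_meters"]
print(totals)
```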
In addition, we calculated percentages to determine the level of deployment, such as the percentage of advanced meters out of all meters and the percentage of residential customers with distributed generation, such as residential solar systems, among others. We analyzed these figures at the national, state, and electricity supplier levels for each year in which data were available. We also obtained and analyzed other data, including: 1) EIA 886 survey data on electric vehicles to determine trends of electric vehicles becoming available in the marketplace each year, 2) EIA estimates of average residential retail electricity prices by state, and 3) EIA estimates on national solar generation by sector. Some technologies, such as battery storage and smart devices, did not have readily available, comprehensive data on deployment. We took several steps to assess the reliability of EIA data. We reviewed relevant documentation, interviewed EIA representatives, reviewed the data for outliers, and addressed outliers through discussions with EIA representatives. In addition, we reviewed available documentation on the Database of State Incentives for Renewables and Efficiency and gathered additional information about the data-gathering practices from knowledgeable representatives at the North Carolina Clean Energy Technology Center, which maintains the database. We determined the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from September 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jon Ludwigson (Assistant Director), Eric Charles, Paige Gilbreath, and Miles Ingram made key contributions to this report. Important contributions were also made by Antoinette Capaccio, John Delicath, Cindy Gilbert, Michael Kendix, Gregory Marchand, MaryLynn Sergent, Maria Stattel, Sara Sullivan, and Barbara Timmerman.
Traditionally, electricity has moved in one direction—from electricity suppliers to customers. Today, solar systems allow electricity to be generated at a customer's home and sent to the grid for electricity suppliers to use to meet other customers' electricity needs. Storage systems allow residential customers to store electricity from the grid or their own solar system for use at a later time. Furthermore, customers can use smart devices, such as thermostats, to manage their electricity consumption. GAO was asked to provide information on the deployment and use of technologies that give customers the ability to generate, store, and manage electricity. This report describes (1) key federal and state policies used to encourage the deployment of these technologies, (2) the extent to which these technologies are being deployed, and (3) the benefits and challenges of deploying these technologies. GAO analyzed available data on technology deployment from EIA and reviewed relevant reports and regulatory documents. GAO interviewed a non-generalizable sample of 46 government agencies and stakeholder organizations. This sample included state regulators and at least one electricity supplier from each of five states: Arizona, California, Hawaii, Minnesota, and New York, which were selected based on their state policies and high levels of technology deployment. GAO is not making recommendations in this report. Federal and state policymakers have used a range of policies to encourage the deployment of solar systems and other technologies that allow residential customers to generate, store, and manage their electricity consumption. For example, federal tax incentives—such as the investment tax credit—have reduced customers' up-front costs of installing solar systems. In addition, a Department of Energy-funded database of renewable energy incentives identifies 41 states with net metering policies that require electricity suppliers to credit customers for electricity sent from their solar systems to the grid, providing an additional incentive. Moreover, in 14 states, customers can also receive state tax credits for installing solar systems, according to the database, which further reduces the up-front costs. According to GAO's analysis of Energy Information Administration (EIA) data, deployment of solar systems has increased significantly in some states, with the total number of residential customers with solar systems increasing sevenfold from 2010 to 2015. However, customers with solar systems represent a very small portion of overall electricity customers—about 0.7 percent of U.S. residential customers in 2015, according to EIA data. Every state experienced growth in the number of customers with residential solar systems, although certain states, such as California and Hawaii, accounted for most of the growth and have had more widespread deployment. For example, about 14 percent of residences in Hawaii have installed a solar system, according to EIA data. Although comprehensive data on the deployment of electricity storage systems and smart devices are not available, the data and information provided by stakeholders GAO interviewed suggest their deployment is limited. The increasing residential deployment of solar systems and other technologies presents potential benefits and challenges, and some policymakers have implemented or are considering measures to address these, as GAO found in its analysis of reports and stakeholder interviews. 
Specifically, these technologies can provide potential benefits through more efficient grid operation, for example, if customers use these technologies to reduce their consumption of electricity from the grid during periods of high demand. Nonetheless, grid operators GAO interviewed said they have begun to confront grid management and other challenges in some areas as solar deployment increases. For example, in some areas of Hawaii, solar systems have generated more electricity than the grid was built to handle, which resulted in the need for infrastructure upgrades in these areas. However, grid operators reported that challenges generally have been manageable because overall residential solar deployment has been low. Policymakers in some states have implemented or are considering measures to maximize potential benefits and mitigate potential challenges associated with the increasing deployment of these technologies. For example, two states' regulators have required electricity suppliers to identify areas of the grid where solar and other technologies would be most beneficial to grid operation. In addition, several state regulators recently have allowed electricity suppliers to adopt voluntary time-based electricity prices that increase when demand for electricity is high, providing customers with an incentive to reduce consumption at these times, potentially by using solar, storage, and other technologies.
In our testimony, we stated that our audit and investigative work on FEMA disaster relief payments associated with hurricanes Katrina and Rita identified additional indications of fraud, waste, and abuse. Specifically, we found that FEMA made nearly $17 million in potentially improper and/or fraudulent rental assistance payments to individuals after they had moved into FEMA trailers. For example, after FEMA provided a trailer to a household in January 2006, it made rental assistance payments to the same household in late January, February, and April of 2006 totaling approximately $5,500. In addition, FEMA provided potentially improper and/or fraudulent rental assistance payments to individuals living in FEMA-provided apartments. For example, FEMA made nearly $46,000 in rental assistance payments to at least 10 individuals living in apartments at the same time that the apartments were being paid for by FEMA through the city of Plano, Texas. Seven of the 10 individuals in this group self-certified to FEMA that they needed rental assistance, despite the fact that they were living in rent-free housing. Because of limitations in FEMA data, we were not able to identify the full extent of potentially improper rental assistance payments made to individuals in FEMA-provided apartments. We also found that nearly $20 million in potentially improper and/or fraudulent payments went to individuals who, using the same property, registered for assistance for both hurricanes Katrina and Rita. FEMA officials explained that, with few exceptions, victims of both disasters are entitled to only one set of IHP payments for the same damaged property. However, FEMA officials told us that to increase the speed with which FEMA could distribute disaster assistance, they turned off the system edits that should have identified these types of duplicate payments. Consequently, FEMA paid over 7,000 individuals IHP assistance twice for the same property: once for Hurricane Katrina and once for Hurricane Rita. These individuals received double payments for expedited assistance, rental assistance, and/or housing replacement. For example, FEMA records showed that one registrant received two housing replacement payments of $10,500 each, despite the fact that he had only one property to replace. Millions of dollars of improper and potentially fraudulent payments also went to nonqualified aliens, including foreign students and temporary workers. For example, FEMA improperly paid at least $3 million in IHP assistance to more than 500 ineligible foreign students at four universities. Further, FEMA provided IHP payments that included expedited assistance and personal property assistance totaling more than $156,000 to 25 individuals who claimed to be foreign workers on temporary visas. FEMA made these payments despite having copies of the work visas for several individuals, which should have alerted FEMA that the temporary workers were not eligible for financial assistance. Social Security Administration records also showed many of the individuals used invalid Social Security numbers, which could have alerted FEMA to the individuals' ineligibility. In addition, several students and university officials stated that FEMA personnel told all students, including international students who did not qualify for IHP assistance, that they were eligible for IHP financial assistance.
Because we did not obtain information from all universities in the Gulf region and because of the unavailability of detailed data on other nonqualified legal aliens, we were not able to determine the magnitude of improper and/or fraudulent payments in this area. Our findings also showed that the small amount of improper payments that FEMA has been able to collect further demonstrates the need to have adequate preventive controls. We previously reported that inadequate preventive controls related to the IHP application process resulted in an estimated $1 billion of potentially improper and/or fraudulent payments through February 2006. In contrast, as of November 2006, FEMA had detected through its own processes about $290 million in overpayments. This overpayment amount, which FEMA refers to as recoupments, represents the improper payments that FEMA had detected and for which it had issued letters requesting repayment. However, through November 2006 FEMA had collected only about $7 million. Collection of only $7 million of an estimated $1 billion of fraudulent and improper payments clearly supports the basic point we have previously made that fraud prevention is far more efficient and effective than detection and collection. With respect to findings regarding the DHS purchase card program, we found weaknesses and breakdowns in accountability for property items bought for hurricanes Katrina and Rita relief efforts using government purchase cards. For example, FEMA is still unable to locate 48 of the 143 missing items (e.g., laptop computers, printers, and GPS units) identified in our July 2006 testimony. Moreover, 37 items were missing from an additional 103 items that we investigated for the July testimony. Thus, over a year after they were purchased, FEMA could not locate 85 of the 246 items (34 percent) that we investigated; we presume these items are now lost or stolen. Our investigation also revealed that although FEMA was in possession of 18 of the 20 flat-bottom boats it had purchased for hurricane relief efforts, FEMA had not received the title to any of these boats. FEMA could not provide any information about the location of the remaining two boats. In response to our December testimony, FEMA acknowledged weaknesses in the processes and systems that resulted in ineligible individuals receiving assistance. FEMA stated that in the 15 months since Hurricane Katrina, it has made great strides in correcting its deficiencies. Examples of improvements that FEMA has informed us it put into service include an upgraded registration application that FEMA expects will prevent duplicate registrations and an identity verification process so that all registrations for assistance are subjected to the same stringent criteria. FEMA believes that the stringent controls it instituted this past year improve its safeguards and will help eliminate processing errors and fraudulent abuse. FEMA further stated that it will consider and evaluate any new findings that can assist in improving its processes and procedures. Based on the findings in our testimony of December 6, 2006, we are recommending that the Secretary of Homeland Security direct the Director of FEMA to take a number of actions to reduce the potential for fraud and abuse. Recommendations include developing controls to prevent duplicate rental assistance benefits, increasing controls to prevent ineligible nonqualified aliens from receiving payments, and enabling controls to prevent duplicate payments to the same individual across multiple disasters.
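One of these controls, preventing duplicate payments to the same individual across multiple disasters, can be illustrated with a simple matching edit. The sketch below is illustrative only: the record layout and field names are hypothetical, and the grouping logic shown is one possible way to implement such a control rather than FEMA's actual design.

from collections import defaultdict

def flag_cross_disaster_duplicates(payments):
    # Group payments by registrant, normalized damaged-property address, and
    # type of assistance, then flag any group paid under more than one
    # disaster declaration. The record layout here is hypothetical.
    groups = defaultdict(set)
    for p in payments:
        key = (p["registrant_id"],
               p["damaged_address"].strip().lower(),
               p["assistance_type"])
        groups[key].add(p["disaster"])
    return [key for key, disasters in groups.items() if len(disasters) > 1]

# Hypothetical example: the same registrant and address paid housing
# replacement assistance under both the Katrina and Rita declarations.
payments = [
    {"registrant_id": "R-001", "damaged_address": "12 Oak St, Biloxi, MS",
     "assistance_type": "housing replacement", "disaster": "Katrina"},
    {"registrant_id": "R-001", "damaged_address": "12 Oak St, Biloxi, MS",
     "assistance_type": "housing replacement", "disaster": "Rita"},
]
print(flag_cross_disaster_duplicates(payments))  # flags one registrant/address pair

In practice, an edit of this kind would run before payments are approved rather than as an after-the-fact detection step.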
FEMA concurred with all recommendations and responded that it had taken, or was in the process of taking, actions to implement these recommendations. However, in its response FEMA indicated that on two of the recommendations it planned to perform investigations to determine the extent of the problems identified prior to implementing the recommendations. Ineffective preventive controls for FEMA's IHP have resulted in substantial fraudulent and improper payments. The additional examples of potentially fraudulent and improper payments, totaling tens of millions of dollars, that we highlighted in our December 2006 testimony further show that our estimate of $1 billion in potentially improper and/or fraudulent payments through February 2006 is likely understated. In addition, we did not include in this total potentially improper and/or fraudulent payments to individuals who received disaster assistance from FEMA even though they also received insurance payments for damaged property. With respect to property bought with government purchase cards, FEMA's inability to find items 1 year after they were purchased, including laptop computers, printers, and GPS units, shows that FEMA property accountability controls are ineffective and possibly resulted in the loss or theft of government property. We have previously provided 25 recommendations to DHS and FEMA to improve management of IHP and the purchase card program. FEMA and DHS had fully concurred with 19 recommendations, and substantially or partially concurred with the remaining 6 recommendations. DHS and FEMA also reported that they have taken actions, or plan to take actions, to implement all our recommendations. While we have not performed work to determine whether FEMA's actions adequately address our recommendations, if properly implemented, our recommendations from previous and current work should allow DHS and FEMA to rapidly provide assistance to disaster victims while at the same time providing reasonable assurance that disaster assistance payments are accurate and properly authorized. As we have stated in prior reports addressing IHP improper and fraudulent payments, these recommendations only address specific weaknesses identified in this report and are only part of a comprehensive fraud prevention program that should be in place. Further, FEMA should ensure that there are adequate manual processes in place to allow registrants who are incorrectly denied assistance to expeditiously appeal the decision and receive aid. Also, FEMA should fully field test all changes to provide assurance that valid registrants are able to apply for and receive IHP payments. We recommend that the Secretary of Homeland Security direct the Director of FEMA to take the following six actions to address weaknesses identified in the administration of IHP. To prevent rental assistance payments from being provided at the same time that FEMA provides free housing (including trailers, mobile homes, and apartments), FEMA should develop processes for comparing IHP registrant data with FEMA direct housing assistance data to prevent IHP registrants from receiving payments for rental assistance covering the time they are living in FEMA-provided housing, and should provide clear guidance to IHP registrants, including rental assistance registrants, indicating how the payments are to be used.
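As an illustration of the kind of data comparison this recommendation envisions, the sketch below checks whether a rental assistance payment date falls within a registrant's period of occupancy in FEMA-provided housing. The field names and function are hypothetical and are not drawn from FEMA's systems; this is a minimal sketch of the overlap test, not an implementation of it.

from datetime import date

def paid_while_in_fema_housing(payment_date, move_in_date, move_out_date=None):
    # Flag a rental assistance payment only if it was made on or after the
    # date the registrant moved into FEMA-provided housing and, if a move-out
    # date is known, on or before that date. Payments made before move-in are
    # not flagged, mirroring the conservative screening described in this report.
    if payment_date < move_in_date:
        return False
    return move_out_date is None or payment_date <= move_out_date

# Hypothetical example: a trailer occupied beginning January 15, 2006,
# and a rental assistance payment dated February 10, 2006.
print(paid_while_in_fema_housing(date(2006, 2, 10), date(2006, 1, 15)))  # True

Such a comparison would require matching IHP payment records to direct housing occupancy records before payments are issued.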
With respect to duplicate assistance payments across multiple disasters, FEMA should implement and/or enable controls to prevent duplicate payments to the same individual from different disasters for the same damage done to the same address. To prevent improper payments to nonqualified aliens, FEMA should provide clear guidance and training to FEMA and contractor employees on the specific types of aliens eligible for financial disaster assistance and on how to identify nonqualified aliens, and develop processes to identify and deny assistance to nonqualified aliens who register for IHP assistance using valid Social Security numbers through data comparisons with agencies that maintain data on legal aliens with Social Security numbers. With respect to property bought with DHS purchase cards, if FEMA cannot locate this property in a reasonable time period, it should work with DHS to reconcile its tracking system data and declare these items lost or stolen. On February 15, 2007, FEMA provided written comments on a draft of this report in which it outlined actions it plans to take or has taken that are designed to address each of our six recommendations. FEMA's comments are reprinted in appendix II. FEMA provided examples of several planned actions to address identified weaknesses. For example, concerning our recommendation to provide clear guidance to victims receiving IHP rental assistance on how funds should be used, FEMA stated that it is conducting a comprehensive review of existing communications policies and is developing a more effective strategy to ensure that registrants understand IHP and its purpose. Additionally, in response to our recommendation to develop processes to identify and deny assistance to nonqualified aliens who register for IHP assistance, FEMA stated that it is reaching out to other federal agencies and commercial vendors in order to enhance FEMA's ability to screen out applications from nonqualified aliens. FEMA's response indicates that it is attempting to address problems we identified in IHP. As the federal government prepares for future disasters, it will be important for FEMA to establish effective controls to prevent fraudulent and improper payments before they occur. However, in its responses to our recommendations concerning actions to prevent duplicate housing assistance and housing damage repair assistance, FEMA also stated it planned to perform additional investigations to confirm that the conditions described in our draft report are in fact representative of systemic problems before initiating appropriate corrective actions. Nonetheless, we continue to believe, as discussed in our testimony (see app. I), that our work amply demonstrates the systemic nature of the problems identified and the need for the recommended corrective actions. Specifically, with respect to our recommendation on preventing individuals from receiving rental assistance payments while residing in FEMA-provided housing (apartments and trailers), we continue to believe our work demonstrates that a systemic problem exists. In fact, the $17 million in potentially duplicate rental assistance paid to thousands of IHP registrants is a conservative figure and may even understate the extent of the problem. In addition, our case studies clearly showed payments that were at least improper and potentially fraudulent.
Further, our work included steps to minimize the possibility that, as FEMA asserted, many of these cases could be explained by the fact that rental assistance payments could have been made retroactively to cover rental expenses prior to the date of payment. Specifically, in arriving at our estimate of the extent of the systemic problem in this area, we took the following steps to ensure that our reported estimate of potentially duplicate payments did not overstate the problem. We only included payments as potential duplicates when they were made to an IHP registrant at the same time that the registrant was residing in FEMA-provided housing. We did not consider payments made before a registrant moved into FEMA-provided housing as duplicates even though FEMA often makes advance rental assistance payments. For example, FEMA provided more than $3 million in rental assistance payments to FEMA trailer registrants in the week before they moved into FEMA trailers. These payments averaged more than $1,700, which indicates they were likely for multiple months of rental assistance and could have been duplicate assistance payments because they would have covered the time the registrants were in FEMA trailers. We conducted field investigations on case studies to ensure that the conclusions reached were accurate. We also excluded from our analysis any payments made to IHP registrants living in FEMA-provided apartments because FEMA failed to maintain detailed, reliable data on individuals living in those apartments. Thus, there are potentially millions of dollars more in duplicate rental assistance payments associated with IHP registrants living in FEMA-provided apartments, as supported by our case study investigations. As discussed in our testimony, our work also clearly demonstrates a systemic problem and the need for our recommended corrective action with respect to controls to prevent duplicate payments to the same individual for the same damage across multiple disasters. FEMA stated it was unsure whether all payments we identified as duplicates were in fact duplicate payments to the same individual for the same damage across multiple disasters. FEMA stated that some payments could have resulted from damage from Hurricane Katrina, and then future payments were made based on different damage caused by Hurricane Rita. However, this assertion is contrary to representations FEMA made to us during the course of the audit. Specifically, FEMA told us during the audit that, with few exceptions, registrants would only be entitled to one payment for each damage and/or need. We acknowledge that a registrant could have had a house damaged by Hurricane Katrina, repaired the damage, and moved back into the original house, only to have it damaged again by Hurricane Rita. However, this is an extremely unlikely scenario given the severity of the damage caused by Hurricane Katrina and the fact that Hurricane Rita occurred shortly afterward, leaving very little time for inspectors to inspect and certify housing damage between storms, especially for the more than 7,000 registrants we identified. According to our case studies, FEMA performed the first inspection of the properties in question after both hurricanes affected the area. Our case studies also showed that FEMA used two different inspectors to look at damaged properties, once for Hurricane Katrina and once for Hurricane Rita.
Without having an inspection performed before Hurricane Rita hit, or having the same inspector review the claim to determine what damage was from Hurricane Rita and what damage was from Hurricane Katrina, FEMA is not in a position to know whether it paid for the same damaged items twice. Therefore, we continue to believe our work demonstrates a systemic problem and that FEMA should implement our recommendation to institute controls that prevent duplicate payments to the same individual for the same damage registered under different disasters. We are sending copies of this report to the Secretary of Homeland Security and the Director of the Federal Emergency Management Agency. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staffs have questions about this report, please contact me at (202) 512-7455 or kutzg@gao.gov; or contact John Kelly at (202) 512-6926 or kellyj@gao.gov. Other individuals who made major contributions to this report were Gary Bianchi, Jennifer Costello, Jason Kelly, Barbara Lewis, Jonathan Meyer, Andrew McIntosh, John Ryan, and Tuyet-Quan Thai.
The Federal Emergency Management Agency (FEMA) continues to respond to hurricanes Katrina and Rita. GAO's previous work identified suspected fraud, waste, and abuse resulting from control weaknesses associated with FEMA's Individuals and Households Program (IHP) and the Department of Homeland Security's (DHS) purchase card program. Congress asked GAO to follow up on this previous work to determine whether potentially improper and/or fraudulent payments continued to be made. GAO testified on the results of our audit and investigative efforts on December 6, 2006. This report summarizes the results of our follow-up work. In our December 6, 2006, testimony, GAO stated that FEMA made tens of millions of dollars of potentially improper and/or fraudulent payments associated with both hurricanes Katrina and Rita. These payments include $17 million in rental assistance paid to individuals to whom FEMA had already provided free housing through trailers or apartments. In one case, FEMA provided free housing to 10 individuals in apartments in Plano, Texas, while at the same time it sent these individuals $46,000 to cover out-of-pocket housing expenses. In addition, several of these individuals certified to FEMA that they needed rental assistance. FEMA made nearly $20 million in duplicate payments to thousands of individuals who claimed damages to the same property from both hurricanes Katrina and Rita. FEMA also made millions in potentially improper and/or fraudulent payments to nonqualified aliens who were not eligible for IHP. For example, FEMA paid at least $3 million to more than 500 ineligible foreign students at four universities in the affected areas. This amount likely understates the total payments to ineligible foreign students because it does not cover all colleges and universities in the area. FEMA also provided potentially improper and/or fraudulent IHP assistance to other ineligible non-U.S. residents, despite having documentation indicating their ineligibility. Finally, FEMA's difficulties in identifying and collecting improper payments further emphasized the importance of implementing an effective fraud, waste, and abuse prevention system. For example, GAO previously estimated improper and potentially fraudulent payments related to the IHP application process to be $1 billion through February 2006. As of November 2006, FEMA identified about $290 million in overpayments and collected about $7 million. Finally, GAO's work on DHS purchase cards showed continuing problems with property accountability, including items GAO investigated that could not be located 1 year after they were purchased.
As of the time of this hearing, the CDFI Fund in the Department of the Treasury has authorized $21 billion of the $26 billion in tax credit authority to be awarded between 2001 and 2009 to CDEs that manage NMTC investments in low-income community development projects. Eligible organizations may apply for and receive NMTC allocations once they have been certified as a CDE by the CDFI Fund (a CDE that receives an allocation is often referred to as an allocatee). After the CDFI Fund makes allocations to CDEs, investors make equity investments by acquiring stock or a capital interest in the CDEs, called qualified equity investments (QEI), in exchange for the right to claim tax credits that total 39 percent of their original investment over 7 years. The CDEs, in turn, are required to invest "substantially all" of the proceeds they receive into qualified low-income community investments (QLICI). Qualified low-income community investments include (but are not limited to) investments in businesses, referred to as qualified active low-income community businesses (QALICB), to be used for residential, commercial and industrial projects, and other types of investments, such as purchasing loans from other CDEs. The CDFI Fund directs CDEs to classify themselves as minority if more than 50 percent of the CDE is owned or controlled by members of a minority ethnic group. In the case of a for-profit CDE, more than 50 percent of the CDE's owners must be minorities; if the entity applying is a nonprofit organization, more than 50 percent of its board of directors must be minorities (or its Chief Executive Officer, Executive Director, General Partner, or Managing Member must be a minority). Representatives from several minority-owned entities and industry associations that we interviewed indicated that minority CDEs and other locally-based community lending organizations may have a better understanding of the economic conditions and availability of capital in the communities they serve than other investment organizations serving those same communities. However, in addition to minority CDEs obtaining NMTC authority and making investments in low-income communities, minority populations may benefit from the NMTC in other ways. For example, non-minority CDEs have also made investments in minority businesses that serve residents in low-income communities. Minority-owned businesses located in eligible NMTC census tracts may hire or provide services to minority residents in low-income communities. According to CDFI Fund officials, it is frequently the case that non-minority-owned businesses located in NMTC-eligible census tracts with highly concentrated minority populations could provide economic benefits to minority residents. Since we issued the report on which this statement is based, the CDFI Fund announced on May 27, 2009, an additional 32 NMTC awards to 2008 applicants totaling $1.5 billion under authority granted by the American Recovery and Reinvestment Act of 2009 (ARRA). According to our analysis, minority CDEs received three of these awards, totaling $135 million. Non-minority CDEs received the other 29 of these awards, totaling about $1.4 billion. The analysis presented in our report was limited to NMTC awards made from 2005 through the original 2008 awards; our analysis did not include the NMTC awards made in accordance with ARRA.
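To make the credit arithmetic concrete, the short sketch below computes the credits generated by a qualified equity investment. The 39 percent total comes from the program description above; the year-by-year split used here (5 percent in each of the first 3 years and 6 percent in each of the last 4) is the commonly cited statutory schedule and is included only as an assumption for illustration.

def nmtc_credits(qei, schedule=(0.05, 0.05, 0.05, 0.06, 0.06, 0.06, 0.06)):
    # Return the credit claimable in each of the 7 credit allowance years for
    # a qualified equity investment (QEI). The 5%/6% schedule is an assumption
    # here; it sums to the 39 percent of the original investment noted above.
    return [qei * rate for rate in schedule]

credits = nmtc_credits(1_000_000)  # a hypothetical $1 million QEI
print(sum(credits))  # 390000.0, i.e., 39 percent of the original investment over 7 years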
From 2005 through 2008, minority-owned CDEs were successful with about 9 percent of the NMTC applications that they submitted to the CDFI Fund and received about $354 million of the $8.7 billion for which they applied, or about 4 percent. By comparison, non-minority CDEs were successful with about 27 percent of their applications and received $13.2 billion of the $89.7 billion for which they applied, or about 15 percent. Since 2005, the first year in which the CDFI Fund collected data on minority CDEs, CDFI Fund application data indicate that 68 minority CDEs have applied for NMTC allocations from the CDFI Fund for a total of 88 applications. Fifteen minority CDEs applied for NMTC allocations in multiple years. From 2005 through 2008, the CDFI Fund received 934 NMTC applications from 566 different CDEs. Of the 68 minority CDEs that applied, 6 CDEs received a total of eight NMTC allocations (2 minority CDEs each received two separate allocations). Minority applicants received about 2.6 percent of the $13.5 billion in total NMTC allocation authority that the CDFI Fund awarded from 2005 through 2008. The CDFI Fund's process for making NMTC awards takes place in two phases. NMTC applications are first reviewed and scored by a group of external reviewers selected by the CDFI Fund who have demonstrated experience in business, real estate, or community development finance. CDEs that meet or exceed minimum thresholds in each of the four main application sections (business strategy, community impact, management capacity, and capitalization strategy) and an overall scoring threshold (out of a total of 25 points in each application section) advance to the second phase, where they are re-ranked based on their scores in the business strategy and community impact sections of the application and half of the priority points awarded to CDEs that demonstrate a track record of investing in low-income communities and investing in unrelated entities. CDFI Fund staff review the amount of allocation authority that the CDE requested and, based on the information in the application materials, award allocation amounts in the descending order of CDEs' final ranking based on their re-ranked scores. According to our analysis of NMTC application data, of the 88 applications submitted by minority CDEs, 31 applications met the minimum threshold scores to advance to the second phase of the NMTC review process from 2005 to 2008. By comparison, during this same time period 518 of the 846 applications submitted by non-minority CDEs met the minimum thresholds to advance to the second phase of the review process. Overall, non-minority CDEs scored about 11 points higher than minority CDEs on NMTC applications from 2005 through 2008. As figure 1 shows, minority CDEs' scores differed the most from non-minority CDEs' scores in the capitalization strategy section of the application, where non-minority CDEs scored 25 percent higher than minority CDEs. Non-minority CDEs scored between 15 percent and 17 percent higher than minority CDEs in the business strategy, community impact, and management capacity sections of the application. To identify challenges minority and non-minority CDEs face in obtaining NMTC allocations, we interviewed representatives from minority and non-minority CDEs, and we analyzed CDFI Fund application data.
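The two-phase review described above can be summarized in a short sketch. The section names and the 25-point maximum per section come from the description above, while the specific threshold values and the handling of priority points are hypothetical placeholders, since the CDFI Fund's actual cutoffs are not given in this statement.

SECTIONS = ("business_strategy", "community_impact",
            "management_capacity", "capitalization_strategy")

def advances_to_phase_two(scores, section_minimum=15, overall_minimum=70):
    # Phase 1: an application must meet a minimum score in each of the four
    # sections (each scored out of 25 points) and an overall threshold.
    # The threshold values used here are hypothetical.
    return (all(scores[s] >= section_minimum for s in SECTIONS)
            and sum(scores[s] for s in SECTIONS) >= overall_minimum)

def phase_two_ranking_score(scores, priority_points=0):
    # Phase 2: applications that advance are re-ranked on their business
    # strategy and community impact scores plus half of any priority points.
    return scores["business_strategy"] + scores["community_impact"] + priority_points / 2

# Hypothetical application scores
scores = {"business_strategy": 20, "community_impact": 19,
          "management_capacity": 18, "capitalization_strategy": 16}
if advances_to_phase_two(scores):
    print(phase_two_ranking_score(scores, priority_points=10))  # 44.0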
While both our testimonial evidence and statistical analysis have limitations, they generally show that a CDE's capacity, measured by asset size in this case, is associated with an increased probability of obtaining an award. CDEs we interviewed generally said it can be difficult on the NMTC application to demonstrate the capacity to effectively use the NMTC and the experience in investing in low-income communities necessary to obtain allocations. According to officials from several CDEs we interviewed, demonstrating the relative impact of NMTC projects through the NMTC application may be particularly difficult when smaller, community-based CDEs compete for allocations against large banks and financial institutions that may have the capacity to undertake larger projects with more easily identifiable economic impacts. Our statistical analysis of all CDEs that applied from 2005 through 2008 demonstrates that the probability that an NMTC applicant will receive an award is associated with certain factors. For example, after controlling for other characteristics, larger CDEs, as measured by asset size, appear to be more likely to receive NMTC awards, while smaller CDEs are less likely to receive awards. When controlling for the factors we could measure, our analysis also shows that minority status is associated with a lower probability of receiving an allocation. It is not clear from our analysis why minority status is associated with a lower probability of obtaining an allocation or whether any actions taken or not taken by the Department of the Treasury or the CDFI Fund contributed to this statistical relationship. Other factors for which our statistical analysis is unable to account, such as experience with the application process, may also be reasons why minority CDEs have not been as successful in obtaining NMTC allocations as non-minority CDEs. For example, according to our 2006 report, certain minority-owned banks have higher loan loss reserves and operating costs than non-minority-owned peers. These types of characteristics could potentially affect the competitiveness of minority CDE NMTC applications, particularly in the business strategy and management capacity sections of the applications. Also, according to industry association representatives, minority-owned banks have traditionally had a more difficult time accessing capital markets than their non-minority peers, and our analysis of the CDFI Fund application data shows that minority CDEs score lowest in the capitalization strategy section of the application. Our analysis indicates that these differences are not explained by the size of the CDE—that is, they are not problems shared, on average, by other small, non-minority CDEs that applied for NMTC allocations. However, these differences could be associated with some other feature that minority CDEs share with non-minority CDEs for which we do not have data to include in our analysis. According to CDFI Fund officials, the CDFI Fund has conducted outreach intended to reach all CDEs that may have an interest in applying for NMTCs, and CDFI Fund staff have given presentations to industry associations, such as the New Markets Tax Credit Coalition; the National Bankers Association (NBA), an industry organization that represents minority-owned banks; and at FDIC conferences targeted to minority-owned institutions.
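The statistical analysis summarized earlier in this statement is not reproduced here. As a rough illustration of the general approach, the sketch below fits a logistic regression of award receipt on asset size and minority status using entirely hypothetical application-level data; it shows only the form such a model can take and does not reflect GAO's actual model, data, or estimates.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical application-level records: [log of total assets, minority indicator].
X = np.array([[15.4, 1], [19.1, 0], [16.1, 1], [20.0, 0],
              [15.9, 1], [17.2, 1], [20.7, 0], [16.8, 0]])
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])  # 1 = application received an allocation

# Fit the model; the coefficient signs indicate how asset size and minority
# status relate to the probability of receiving an award in these made-up
# data, holding the other variable constant.
model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict_proba([[16.0, 1]])[0, 1])  # predicted award probability for a small minority CDE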
According to CDFI Fund officials, they have more recently developed a relationship with the Department of Commerce's Minority Business Development Agency that they hope will lead to additional applications by minority CDEs. The CDFI Fund also provides a written debriefing to each CDE that does not receive an allocation to assist the CDE in future application rounds. This debriefing provides the unsuccessful CDE with information about its scores in each of the application sections and written comments on areas of weakness within each of the four main application sections. Officials from some CDEs we interviewed noted that the debriefing document helped them submit more competitive application materials in future rounds. Officials from a few CDEs noted that the debriefing comments were not consistent from one year to another. External stakeholders, including representatives from industry associations we identified, hold conferences and offer varying degrees of assistance to CDEs submitting competitive NMTC applications. In addition, CDEs often hire consultants to assist them with completing their NMTC applications. Consultants offer a range of services to CDEs, from reviewing NMTC applications for completeness and depth of responses to completing the entire NMTC application for an applicant. According to CDEs we interviewed, fees charged by consultants cover a broad range based on the services that the consultant provides. For example, officials from several CDEs indicated that they paid consultants less than $5,000 to review their NMTC applications while others paid consultants as much as $50,000 for a more complete set of services. The legislative history for the NMTC does not address whether Congress intended for minority CDEs to benefit directly from the NMTC program. However, if Congress intends for minority CDEs' participation in the NMTC program to exceed the current levels and Congress believes that minority CDEs have unique characteristics that position them to target the NMTC to its most effective use, Congress may want to consider legislative changes to the program should the New Markets Tax Credit be extended beyond 2009. Potential changes that could be considered include, but would not be limited to, the following: (1) similar to provisions for certain federal grant programs, requiring that a certain portion of the overall amount of allocation authority be designated for minority CDEs; (2) in accordance with information we obtained in discussions with several experts in economic development, exploring the potential for creating a pool of NMTC allocation authority to be dedicated specifically for community banks (minority banks that are certified CDEs, in most cases, would likely compete with non-minority community banks with similar characteristics for NMTC allocations); or (3) similar to other federal programs where preferences are given to targeted populations, offering priority points to minority CDEs that apply for NMTC allocations. In addition, a fourth option would be for Congress to direct the Department of the Treasury and the CDFI Fund to explore options for providing minority CDEs with technical assistance in applying for and using NMTC allocations. Although these options could increase the amount of NMTC authority awarded to minority CDEs, they may not address the underlying reasons for minority CDEs' lower success rates, in part because we could not definitively identify the reasons why minority CDEs have scored lower on the NMTC application than non-minority CDEs.
In addition, implementing these changes would require addressing a number of issues, including legal and administrative concerns, associated with such changes in the NMTC application process. The CDFI Fund reviewed a draft of our report and agreed with our key conclusion that minority CDEs have not received awards in proportion to their representation in the application pool, but did not comment on our options. The CDFI Fund’s response letter is reprinted in appendix VII of our report. The CDFI Fund also provided several technical comments on our report, which we incorporated as appropriate. Chairmen, this concludes my remarks. As I noted earlier, the more detailed findings and conclusions of our review of minority CDEs’ participation in the New Markets Tax Credit program can be found in our recently issued report (GAO-09-536). I would be happy to answer any questions you or other members of the subcommittees may have. For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or brostkem@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the individual named above, Kevin Daly, Assistant Director; LaKeshia Allen; Don Brown; Thomas Gilbert; Cristian Ion; Jean McSween; Ed Nannenhorn; and Cheryl Peterson made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Community Development Financial Institutions (CDFI) Fund in the Department of the Treasury has awarded $21 billion of the $26 billion in New Markets Tax Credits (NMTC) authorized to be awarded to Community Development Entities (CDE) between 2001 and 2009. CDEs use the NMTC to make qualified investments in low-income communities. Recent congressional interest has focused on participation by minority CDEs. This testimony is based on a recent GAO report (GAO-09-536). As requested, the report (1) identified the number of minority and non-minority CDEs that applied to the CDFI Fund and received NMTC awards, (2) explained the process by which the CDFI Fund makes awards and summarized application scores, (3) described challenges minority and non-minority CDEs face in applying for and receiving awards and, (4) identified efforts the CDFI Fund and others have taken to assist minority CDEs in applying for awards. GAO analyzed CDFI Fund application data and interviewed officials from minority and non-minority CDEs, the CDFI Fund, and industry groups. From 2005 through 2008, minority-owned CDEs were successful with about 9 percent of the NMTC applications that they submitted to the CDFI Fund and received about $354 million of the $8.7 billion for which they applied, or about 4 percent. Non-minority CDEs were successful with about 27 percent of their applications and received $13.2 billion of the $89.7 billion for which they applied, or about 15 percent. Since GAO issued the report on which this statement is based, the CDFI Fund made 32 NMTC awards totaling $1.5 billion under authority provided in the American Recovery and Reinvestment Act. Minority CDEs received 3 of those awards, totaling $135 million. The CDFI Fund relies primarily on its scoring of applications to determine which CDEs receive awards. As the figure shows, minority CDEs received lower scores than non-minority CDEs in each of the four application sections. Although a CDE's resources and experience in applying are important factors in a CDE's success rate with the NMTC program, when controlling for factors that GAO could measure, minority status is associated with a lower probability of receiving an allocation. It is not clear from GAO's analysis why this relationship exists or whether any actions taken or not taken by the Department of the Treasury contributed to minority CDEs' lower probability of success. Characteristics associated with minority status of some CDEs for which data are unavailable may affect this relationship. If Congress views increased participation by minority CDEs as a goal for the NMTC program, options, such as providing certain preferences in the application process that may benefit minority CDEs, could be considered. The CDFI Fund provides assistance that is available to all CDEs applying for awards, including a written debriefing to CDEs that do not receive awards detailing some of the weaknesses in the applications. Other stakeholders, including industry associations and consultants, hold conferences and offer services to help CDEs submit competitive applications. Should Congress view additional assistance to minority CDEs as important to increasing minority CDEs' participation in the NMTC program, it could consider requiring the CDFI Fund to provide assistance to minority CDEs.
The federal-state UI program, created in part by the Social Security Act of 1935, is administered under state law based on federal requirements. The primary objectives of the program are to provide temporary, partial compensation for lost earnings of eligible individuals who become unemployed through no fault of their own and to stabilize the economy during downturns. Applicants for UI benefits must have earned at least a certain amount in wages and/or have worked a certain number of weeks to be eligible. In addition, these individuals must, with limited exceptions, be available for and able to work, and actively search for work. The federal-state structure of the program places primary responsibility for its administration on the states, and gives them wide latitude to administer the programs in a manner that best suits their needs within the guidelines established by federal law. Within the context of the federal-state partnership, Labor has general responsibility for overseeing the UI program to ensure that the program is operating effectively and efficiently. For example, Labor is responsible for monitoring state operations and procedures, providing technical assistance and training, and analyzing UI program data to diagnose potential problems. State agencies rely extensively on IT systems to carry out their UI program functions. These include systems for administering benefits and for collecting and administering the taxes used to fund the programs. Benefit systems are used for determining eligibility for benefits; recording claimant filing information, such as demographic information, work history, and qualifying wage credits; determining updates as needed, such as changes in work-seeking status; and calculating state-specific weekly and maximum benefit amounts. Tax systems are used for online reporting and payment of employers' tax and wage reports; calculating tax, wage, and payment adjustments, and any penalties or interest accrued; processing quarterly tax and wage amounts; determining and processing late payment penalties, interest, civil penalties, or fees; and adjusting previously filed tax and wage reports as a result of a tax audit, an amended report submitted by the employer, or an erroneously keyed report. However, the majority of the states' existing systems for UI operations were developed in the 1970s and 1980s. Although some agencies have performed upgrades throughout the years, most of the state legacy systems have aged considerably. As they have aged, the systems have presented challenges to the efficiency of states' existing IT environments. In a survey published by the National Association of State Workforce Agencies (NASWA) in 2010, states reported the following issues: Over 90 percent of the systems run on outdated hardware and use outdated software programming languages, such as Common Business Oriented Language (COBOL), which is one of the oldest computer programming languages. The systems are costly and difficult to support. The survey found, for example, that over two-thirds of states face growing costs for mainframe hardware and software support of their legacy systems. Most states' systems cannot efficiently handle current workload demands, including experiencing difficulties implementing new federal or state laws due to constraints imposed by the systems. States have realized an increasing need to transition to web-based online access for UI data and services.
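The benefit calculations these systems perform can be illustrated with a simple example. The formula below is purely hypothetical: actual weekly benefit formulas, wage requirements, and maximums vary by state and are set in state law, so this sketch only shows the general shape of the state-specific calculations described above.

def weekly_benefit_amount(high_quarter_wages, fraction=1 / 26, state_maximum=450.00):
    # Hypothetical state formula: a fraction of the claimant's highest-quarter
    # wages (roughly half the average weekly wage in that quarter), capped at
    # a state maximum. Real formulas and caps differ from state to state.
    return round(min(high_quarter_wages * fraction, state_maximum), 2)

print(weekly_benefit_amount(9_100))    # 350.0
print(weekly_benefit_amount(15_000))   # 450.0, capped at the hypothetical state maximum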
States also cited specific issues with their legacy systems, including the fact that they cannot be reprogrammed quickly enough to respond to changes resulting from legislative mandates. In addition, states have developed one or more stand-alone ancillary systems to fulfill specific needs, but these systems are not integrated with their legacy mainframe systems, decreasing efficiency. Finally, according to the states, existing legacy systems cannot keep up with advances in technology, such as the move to place more UI services online. In addition to providing general oversight of the UI program, the Department of Labor plays a role in facilitating the modernization of states' UI IT systems. This role consists primarily of providing funding and technical support to the state agencies. In this regard, Labor distributes federal funds to each state for the purpose of administering its UI program, including funds that can be used for IT modernization. Through supplemental budget funds, Labor has supported the establishment of state consortiums, in which three or four states work together to develop and share a common system. These efforts are intended to allow multiple states to pool their resources and reduce risk in the pursuit of a single common system that they can each use after applying state-specific programming and configuration settings. Labor also helps to provide technical assistance to the states by supporting and participating in two key groups—NASWA and the Information Technology Support Center (ITSC). NASWA provides a forum for states to exchange information and ideas about how to improve program operations; serves as a liaison between state workforce agencies and federal government agencies, Congress, businesses, and intergovernmental groups; and is the collective voice of state agencies on workforce policies and issues. ITSC is funded by Labor and the states to provide technical services, core projects, and a central capacity for exploring the latest technology for all states. ITSC's core services to states include application development, standards development, and UI modernization services, among others. Our September 2012 report noted that selected states had made varying progress in modernizing the IT systems supporting their UI programs. Specifically, we found that each of the three states that were part of a multistate consortium was in the initial phases of planning, which included defining business needs and requirements; two individual states were in the development phase—that is, building the system based on requirements; two were in a "mixed" phase where part of the system was in development and part was in the operations and maintenance phase; and two had completed their systems, which were in operations and maintenance. These efforts had, among other things, enhanced states' UI technology to support web-based services with more modern databases and replaced outdated programming languages. They also included the development of auxiliary systems, such as document management systems and call center processing systems. Nevertheless, while the states had made progress, we found that they faced a number of challenges related to their modernization efforts. In particular, individual states encountered the following challenges, among others: All nine states cited limited funding and/or the increasing cost of UI systems as a major challenge. For example, they said that the economic downturn had resulted in smaller state budgets, which limited state funds for IT modernization.
Moreover, once funds were identified or obtained, it often took a considerable amount of time to complete the IT project. Officials added that developing large state or multistate systems may span many years, and competing demands on resources can delay project implementation. As a result, states may fund one phase of a project with the hope that funds will be available in the future for subsequent phases. This lack of consistent funding potentially hinders effective IT project planning. Seven of the nine states cited a lack of staff in their UI offices with the expertise necessary to manage IT modernization efforts: Several states said they lacked sufficient subject matter experts knowledgeable in the extensive rules and requirements of the UI program. Such experts are essential to helping computer designers and programmers understand the program's business processes, supporting an effective transition to the reengineered process, and identifying system requirements and needs. States also identified challenges in operating and maintaining a system developed by vendors because state employees may have lacked the needed expertise to maintain the new system once the vendor staff leave. The states added that their staffs may implement larger-scale systems only once every 10 to 15 years, leading to gaps in required knowledge and skills, process maturity and discipline, and executive oversight. States further stressed that their staffs may have expertise in an outdated computer language, while modernization efforts require them to learn new skills and more modern programming languages. According to a 2011 workforce survey, over 78 percent of state chief information officers confirmed that state salary rates and pay grade structures presented a challenge in attracting and retaining skilled IT talent. According to Labor, the limited staff resources facing states have required that subject matter experts be pulled off projects to address the workload demands of daily operations. Six of the nine states noted that continuing to operate their legacy systems while simultaneously implementing new UI systems required them to balance scarce staff resources between the two major efforts. In addition to the challenges facing individual states, we found that states participating in multistate consortiums encountered a separate set of challenges: Representatives from all three consortiums indicated that differences among states in procurement, communication, and implementation of best practices; the involvement of each state's IT office; and the extent to which the state's IT is centralized could impact the effort to design and develop a common system. As a result, certain state officials told us that consortiums were not practical; one official questioned whether a common platform or system could be successfully built and made transferable among states in an economically viable way. States within a consortium often had different views on the best approach to developing and modernizing systems. State officials said that using different approaches to software development is not practical when developing a common system, but that it was difficult to reach consensus on a single approach. In one case, a state withdrew from a consortium because it disagreed with the development approach being taken by the consortium. States had concerns about liabilities in providing services to another state.
IT representatives from one consortium’s lead state noted that decisions taken by the lead state could result in blame for outcomes that other states were unsatisfied with, and there was a concern that the lead state’s decision making could put other states’ funds at risk. One state withdrew from its leadership position because of such concerns about liability. Reaching agreement on the location of system resources could also be a challenge. For example, one consortium encountered difficulty in agreeing on the location of a joint data center to support the states and on the resources that should be dedicated to operating and managing the facility, while complying with individual state requirements. All three consortium representatives we spoke to noted that obtaining an independent and qualified leader for a multistate modernization effort was challenging. State IT project managers and chief information officers elaborated that while each state desires to successfully reach a shared goal, the leader of a consortium must keep the interests of each state in balance and have extensive IT experience that goes beyond his or her own state’s technology environment. Both individual states and consortium officials had developed methods to mitigate specific challenges and identified lessons learned. For example, several states were centralizing and standardizing their IT operations to address technical challenges; found that a standardized, statewide enterprise architecture could provide a more efficient way to leverage project development; and took steps to address consortium challenges they encountered, such as ensuring that each state’s IT department is involved in the project. In our report, we noted that ITSC had been tasked with preparing an assessment of lessons learned from states’ modernization efforts, but at the time of our review, this assessment had not been completed. Moreover, the scope of the assessment was limited to ITSC’s observations and had not been formally reviewed by the states or Labor. A comprehensive assessment would include formal input from states and consortiums, the ITSC Steering Committee, and Labor. Accordingly, we recommended that Labor (1) perform a comprehensive analysis of lessons learned and (2) distribute the analysis to each state through an information-sharing platform or repository, such as a website. Labor generally agreed with the first recommendation; it did not agree or disagree with the second recommendation but said it was committed to sharing lessons learned. In addition, the nine states in our review had established, to varying degrees, certain IT management controls that aligned with industry- accepted program management practices. These controls included the following: establishing aspects of a project management office for centralized and coordinated management of projects under its domain; incorporating industry-standard project management processes, tools, and techniques into their modernization UI efforts; adopting independent verification and validation to verify the quality of the modernization projects; and employing IT investment management standards, such as those called for in our IT investment management framework. If effectively implemented, these controls could help successfully guide the states’ UI modernization efforts. 
In summary, while states have taken steps to modernize the systems supporting their UI programs, they face a number of challenges in updating their aging legacy systems and moving program operations to a modern web-based IT environment. Many of the challenges pertain to inconsistent funding, a lack of sufficient staff with adequate expertise, and in some cases, the difficulty of effective interstate collaboration. States have begun to address some of these challenges, and the nine states in our review had established some IT management controls, which are essential to successful modernization efforts. In addition, the Department of Labor can continue to play a role in supporting and advising states in their efforts. Chairman Reichert, Ranking Member Doggett, and Members of the Subcommittee, this concludes my statement. I would be happy to answer any questions at this time. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions include Christie Motley, Assistant Director; Lee A. McCracken; and Charles E. Youman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The joint federal-state unemployment insurance program is the Department of Labor's largest income maintenance program, and its benefits provide a critical source of income for millions of unemployed Americans. The program is overseen by Labor and administered by the states. To administer their UI programs, states rely heavily on IT systems--both to collect and process revenue from taxes and to determine eligibility and administer benefits. However, many of these systems are aging and were developed using outdated computer programming languages, making them costly and difficult to support and incapable of efficiently handling increasing workloads. Given the importance of IT to state agencies' ability to process and administer benefits, GAO was asked to provide testimony summarizing aspects of its September 2012 report on UI modernization, including key challenges states have encountered in modernizing their tax and benefit systems. To develop this statement, GAO relied on its previously published work. As GAO reported in September 2012, nine selected states had made varying degrees of progress in modernizing the information technology (IT) systems supporting their unemployment insurance (UI) programs. Specifically, the states' modernization efforts were at various stages--three were in early phases of defining business needs and requirements, two were in the process of building systems based on identified requirements, two were in a "mixed" phase of having a system that was partly operational and partly in development, and two had systems that were completely operational. The enhancements provided by these systems included supporting web-based technologies with more modern databases and replacing outdated programming languages, among others. Nevertheless, while taking steps to modernize their systems, the selected states reported encountering a number of challenges, including the following: Limited funding and the increasing cost of UI systems. The recent economic downturn resulted in smaller state budgets, limiting what could be spent on UI system modernization. In addition, competing demands and fluctuating budgets made planning for system development, which can take several years, more difficult. A lack of sufficient expertise among staff. Selected states reported that they had insufficient staff with expertise in UI program rules and requirements, the ability to maintain IT systems developed by vendors, and knowledge of current programming languages needed to maintain modernized systems. A need to continue to operate legacy systems while simultaneously implementing new systems. This required states to balance scarce resources between these two efforts. In addition, a separate set of challenges arose for states participating in multistate consortiums, which were established to pool resources for developing joint systems that could be used by all member states: Differences in state laws and business processes impacted the effort to design and develop a common system. States within a consortium differed on the best approach for developing and modernizing systems and found it difficult to reach consensus. Decision making by consortium leadership raised concerns about liability for outcomes that could negatively affect member states. Consortiums found it difficult to obtain a qualified leader for a multistate effort who was unbiased and independent. Both consortium and individual state officials had taken steps intended to mitigate challenges.
GAO also noted that a comprehensive assessment of lessons learned could further assist states' efforts. In addition, the states in GAO's review had established certain IT management controls that can help successfully guide modernization efforts. These controls include establishing a project management office, using industry-standard project management guidance, and employing IT investment management standards, among others. In its prior report on states' UI system modernization efforts, GAO recommended that the Department of Labor conduct an assessment of lessons learned and distribute the analysis to states through an information-sharing platform such as a website. Labor agreed with the first recommendation; it neither agreed nor disagreed with the second recommendation, but stated that it was committed to sharing lessons learned.
BSA established reporting, recordkeeping, and other anti-money-laundering (AML) requirements for financial institutions. By complying with BSA/AML requirements, U.S. financial institutions assist government agencies in the detection and prevention of money-laundering and terrorist financing by maintaining effective internal controls and reporting suspicious financial activity. BSA regulations require financial institutions, among other things, to comply with recordkeeping and reporting requirements, including keeping records of cash purchases of negotiable instruments, filing reports of cash transactions exceeding $10,000, and reporting suspicious activity that might signify money laundering, tax evasion, or other criminal activities. In addition, financial institutions are required to have AML compliance programs that incorporate (1) written AML compliance policies, procedures, and controls; (2) an independent audit review; (3) the designation of an individual to assure day-to-day compliance; and (4) training for appropriate personnel. Over the years, these requirements have evolved into an important tool to help a number of regulatory and law enforcement agencies detect money laundering, drug trafficking, terrorist financing, and other financial crimes. The regulation and enforcement of BSA involves several different federal agencies, including FinCEN, the federal banking regulators—FDIC, Federal Reserve, NCUA, and OCC—DOJ, and SEC. FinCEN oversees the administration of BSA, has overall authority for enforcing compliance with its requirements and implementing regulations, and also has the authority to enforce the act, primarily through civil money penalties. BSA/AML examination authority has been delegated to the federal banking regulators, among others. The banking regulators use this authority and their independent authorities to examine entities under their supervision, including national banks, state member banks, state nonmember banks, thrifts, and credit unions, for compliance with applicable BSA/AML requirements and regulations. Under these independent prudential authorities, they may also take enforcement actions independently or concurrently for violations of BSA/AML requirements and assess civil money penalties against financial institutions and individuals. The authority to examine broker-dealers and investment companies (mutual funds) for compliance with BSA and its implementing regulations has been delegated to SEC, and SEC has independent authority to take related enforcement actions. DOJ’s Criminal Division develops, enforces, and supervises the application of all federal criminal laws except those specifically assigned to other divisions, among other responsibilities. The division and the 93 U.S. Attorneys have the responsibility for overseeing criminal matters as well as certain civil litigation. With respect to BSA/AML regulations, DOJ may pursue investigations of financial institutions and individuals for both civil and criminal violations that may result in dispositions including fines, penalties, or the forfeiture of assets. In the cases brought against financial institutions that we reviewed, the assets were either cash or financial instruments. Under the statutes and regulations that guide the assessment amounts for fines, penalties, and forfeitures, each federal agency has the discretion to consider the financial institution’s cooperation and remediation of its BSA/AML internal controls, among other factors.
The FCPA contains both antibribery and accounting provisions that apply to issuers of securities, including financial institutions. The antibribery provisions prohibit issuers, including financial institutions, from making corrupt payments to foreign officials to obtain or retain business. The accounting provisions require issuers to make and keep accurate books and records and to devise and maintain an adequate system of internal accounting controls, among other things. SEC and DOJ are jointly responsible for enforcing the FCPA and have authority over issuers, their officers, directors, employees, stockholders, and agents acting on behalf of the issuer for violations, as well as entities that violate the FCPA. Both SEC and DOJ have civil enforcement authority over the FCPA’s antibribery provisions as well as over accounting provisions that apply to issuers. DOJ also has criminal enforcement authority. Generally, financial sanctions programs create economic penalties in support of U.S. policy priorities, such as countering national security threats. Sanctions are authorized by statute or executive order, and may be comprehensive (against certain countries) or more targeted (against individuals and groups such as regimes, terrorists, weapons of mass destruction proliferators, and narcotics traffickers). Sanctions are used to, among other things, block assets, impose trade embargos, prohibit trade and investment with some countries, and bar economic and military assistance to certain regimes. For example, financial institutions are prohibited from using the U.S. financial system to make funds available to designated individuals or banks and other entities in countries targeted by sanctions. Financial institutions are required to establish compliance and internal audit procedures for detecting and preventing violations of U.S. sanction laws and regulations, and are also required to follow OFAC reporting requirements. Financial institutions are to implement controls consistent with their risk assessments, often using systems that identify designated individuals or entities and automatically escalate related transfers for review and disposition. Institutions may also have a dedicated compliance officer and an officer responsible for overseeing blocked funds, compliance training, and in-depth annual audits of each department in the bank. Treasury, DOJ, and federal banking regulators all have roles in implementing U.S. sanctions programs requirements relevant to financial institutions. Specifically, Treasury has primary responsibility for administering and enforcing financial sanctions, developing regulations, conducting outreach to domestic and foreign financial regulators and financial institutions, identifying sanctions violations, and assessing the effects of sanctions. Treasury and DOJ also enforce sanctions regulations by taking actions against financial institutions for violations of sanctions laws and regulations, sometimes in coordination with the federal and state banking regulators. As part of their examinations of financial institutions for BSA/AML compliance, banking regulators and SEC also review financial institutions to assess their compliance programs for sanction laws and regulations. Treasury and DOJ maintain funds and accounts for fines, penalties, and forfeitures that are collected. Expenditure of these funds is guided by statute, and Treasury and DOJ are permitted to use the revenue from their funds to pay for expenses associated with forfeiture activities. 
Treasury administers and maintains the Treasury General Fund and TFF. Treasury General Fund receipt accounts hold all collections that are not earmarked by law for another account for a specific purpose or presented in the President’s budget as either governmental (budget) or offsetting receipts. These collections include taxes, customs duties, and miscellaneous receipts. The TFF is a multidepartmental fund that is the receipt account for agencies participating in the Treasury Forfeiture Program (see table 1). The program has four primary goals: (1) to deprive criminals of property used in or acquired through illegal activities; (2) to encourage joint operations among federal, state, and local law enforcement agencies, as well as foreign countries; (3) to strengthen law enforcement; and (4) to protect the rights of the individual. Treasury’s Executive Office for Asset Forfeiture is responsible for the management and oversight of the TFF. DOJ administers and maintains deposit accounts for the penalties and forfeitures it assesses, including the AFF and the Crime Victims Fund. The AFF is the receipt account for forfeited cash and proceeds from the sale of forfeited assets generated by the Justice Asset Forfeiture Program (see table 2). A primary goal of the Justice Asset Forfeiture Program is preventing and reducing crime through the seizure and forfeiture of assets that were used in or acquired as a result of criminal activity. The Crime Victims Fund is the receipt account for criminal fines and special assessments collected from convicted federal offenders, as well as federal revenues from certain other sources. It was established to provide assistance and grants for victim services throughout the United States. In addition, the Consolidated Appropriations Act, 2016, established a new forfeiture fund—the United States Victims of State Sponsored Terrorism Fund—to receive the proceeds of forfeitures resulting from sanctions-related violations. Specifically, the fund will be used to receive proceeds of forfeitures related to violations of the International Emergency Economic Powers Act and the Trading with the Enemy Act, as well as other offenses related to state sponsors of terrorism. Since 2009, financial institutions have been assessed about $12 billion in fines, penalties, and forfeitures for violations of BSA/AML, FCPA, and U.S. sanctions program requirements. Specifically, from January 2009 through December 2015, federal agencies assessed about $5.2 billion for BSA violations, $27 million for FCPA violations, and about $6.8 billion for violations of U.S. sanctions program requirements. Of the $12 billion, federal agencies have collected all of these assessments, except for about $100 million. The majority of the $100 million that was uncollected was assessed in 2015 and is either subject to litigation, current deliberations regarding the status of the collection efforts, or bankruptcy proceedings. From January 2009 to December 2015, DOJ, FinCEN, and federal financial regulators (the Federal Reserve, FDIC, OCC, and SEC) assessed about $5.2 billion and collected about $5.1 billion in penalties, fines, and forfeitures for various BSA violations. Financial regulators assessed a total of about $1.4 billion in penalties for BSA violations for which they were responsible for collecting, and collected almost all of this amount (see fig. 1). The amounts assessed by the financial regulators and Treasury are guided by statute and based on the severity of the violation.
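These totals can be reproduced with simple arithmetic. The following minimal Python sketch is illustrative only: it uses the rounded figures reported above and a hypothetical record layout, not any agency data system, to tally assessed and collected amounts by violation type.

```python
# Minimal, illustrative tally of assessments and collections by violation type.
# Figures are the rounded amounts reported above (January 2009-December 2015);
# the record layout is hypothetical and does not represent any agency system.
from collections import defaultdict

# (violation type, dollars assessed, dollars collected), rounded
assessments = [
    ("BSA/AML", 5_200_000_000, 5_100_000_000),
    ("FCPA", 27_000_000, 27_000_000),
    ("U.S. sanctions programs", 6_800_000_000, 6_800_000_000),
]

assessed_by_type = defaultdict(int)
collected_by_type = defaultdict(int)
for violation_type, assessed, collected in assessments:
    assessed_by_type[violation_type] += assessed
    collected_by_type[violation_type] += collected

total_assessed = sum(assessed_by_type.values())
total_uncollected = total_assessed - sum(collected_by_type.values())

print(f"Total assessed:    ${total_assessed / 1e9:.1f} billion")    # about $12.0 billion
print(f"Total uncollected: ${total_uncollected / 1e6:.0f} million")  # about $100 million
```

Running the sketch yields roughly $12.0 billion assessed and about $100 million uncollected, consistent with the rounded totals reported above.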
Based on our review of regulators’ data and enforcement orders, the federal banking regulators assessed penalties for the failure to implement or develop adequate BSA/AML programs, and for failure to identify or report suspicious activity. Of the $1.4 billion, one penalty (assessed by OCC) accounted for almost 35 percent ($500 million). This OCC enforcement action was taken against HSBC Bank USA for having a long-standing pattern of failing to report suspicious activity in violation of BSA and its underlying regulations and for the bank’s failure to comply fully with a 2010 cease-and-desist order. Financial regulators assess penalties for BSA violations both independently and concurrently with FinCEN. In a concurrent action, FinCEN will jointly assess a penalty with the regulator and deem the penalty satisfied with a payment to the regulator. Out of the $1.4 billion assessed, $651 million was assessed concurrently with FinCEN to 13 different financial institutions. FinCEN officials told us that FinCEN could take enforcement actions independently, but tries to take actions concurrently with regulators to mitigate duplicative penalties. During this period, SEC also assessed about $16 million in penalties and disgorgements against broker-dealers for their failure to comply with the record-keeping and retention requirements under BSA. SEC’s penalties ranged from $25,000 to $10 million—which included a $4.2 million disgorgement. As of December 2015, SEC had collected about $9.4 million of the $16 million it had assessed. In the case resulting in a $10 million assessment, Oppenheimer & Company Inc. failed to file Suspicious Activity Reports on an account selling and depositing large quantities of penny stocks. In addition, FinCEN assessed about $108 million in penalties that it was responsible for collecting. Based on our analysis, almost all of the $108 million was assessed in 2015, of which $9.5 million had been collected as of December 2015. Of the $108 million FinCEN assessed, three large penalties totaling $93 million—including a $75 million penalty—were assessed in 2015 and, according to FinCEN officials, have not been collected due to litigation, current deliberations regarding the status of the collection efforts, or pending bankruptcy actions. FinCEN’s penalty assessment amounts ranged from $5,000 to $75 million for this period and are guided by statute and regulations, the severity of the BSA violation, and other factors. For example, in a case resulting in a $75 million penalty assessment against Hong Kong Entertainment (Overseas) Investments, FinCEN found that the casino’s weak AML internal controls led to the concealment of large cash transactions over a 4-year period. We found that institutions were assessed penalties by FinCEN for a lack of AML internal controls, failure to register as a money services business, or failure to report suspicious activity as required. Through fines and forfeitures, DOJ, in cooperation with other law enforcement agencies and often through the federal court system, collected about $3.6 billion from financial institutions from January 2009 through December 2015 (see fig. 2). Almost all of this amount resulted from forfeitures, while about $1 million was from fines. As of December 2015, $1.2 million had not been collected in the cases we reviewed. These assessments consisted of 12 separate cases and totaled about 70 percent of all penalties, fines, and forfeitures assessed against financial institutions for BSA violations.
DOJ’s forfeitures ranged from about $240,000 to $1.7 billion, and six of the forfeitures were at least $100 million. According to DOJ officials, the amount of forfeiture is typically determined by the amount of the proceeds of the illicit activity. In 2014, DOJ assessed a $1.7 billion forfeiture—the largest penalty related to a BSA violation—against JPMorgan Chase Bank. DOJ cited the bank for its failure to detect and report the suspicious activities of Bernard Madoff. The bank failed to maintain an effective anti-money-laundering program and report suspicious transactions in 2008, which contributed to its customers losing about $5.4 billion in Bernard Madoff’s Ponzi scheme. For the remaining cases, financial institutions were generally assessed fines and forfeitures for failures in their internal controls over AML programs and in reporting suspicious activity. From January 2009 through December 2015, SEC collected approximately $27 million in penalties and disgorgements from two financial institutions for FCPA violations. SEC assessed $10.3 million in penalties, $13.6 million in disgorgements, and $3.3 million in interest combined for the FCPA violations. The penalties were assessed for insufficient internal controls and FCPA books and records violations. SEC officials stated that the fact that they had not levied more penalties against financial institutions for FCPA violations, relative to other types of institutions, may be due, in part, to financial institutions being subject to greater regulatory oversight than other industries. While DOJ and SEC have joint responsibility for enforcing FCPA requirements, DOJ officials stated that they did not assess any penalties against financial institutions during the period of our review. From January 2009 through December 2015, OFAC independently assessed $301 million in penalties against financial institutions for sanctions programs violations. The $301 million OFAC assessed comprised 47 penalties, with penalty amounts ranging from about $8,700 to $152 million. Of the $301 million, OFAC has collected about $299 million (see fig. 3). OFAC’s enforcement guidelines provide the legal framework for analyzing apparent violations. Some of the factors that determine the size of a civil money penalty include the sanctions program at issue and the number of apparent violations and their value. For example, OFAC assessed Clearstream Banking a $152 million penalty because it made securities transfers for the central bank of a sanctioned country. DOJ, along with participating Treasury offices and other law enforcement partners, assessed and enforced criminal and civil forfeitures and fines totaling about $5.7 billion for the federal government for sanctions programs violations. This amount was the result of eight forfeitures that also included two fines. Of the $5.7 billion collected for sanctions programs violations, most was collected from one financial institution—BNP Paribas. In total, BNP Paribas was assessed an $8.8 billion forfeiture and a $140 million criminal fine in 2014 for willfully conspiring to commit violations of various sanctions laws and regulations. BNP Paribas pleaded guilty to moving more than $8.8 billion through the U.S. financial system on behalf of sanctioned entities from 2004 to 2012. Of the $8.8 billion forfeited, $3.8 billion was collected by Treasury’s Executive Office for Asset Forfeiture, with the remainder apportioned among participating state and local agencies.
In addition to BNP Paribas, DOJ and OFAC assessed fines and forfeitures against other financial institutions for similar violations, including processing transactions in violation of the International Emergency Economic Powers Act, OFAC regulations, and the Trading with the Enemy Act. From January 2009 through December 2015, the Federal Reserve independently assessed and collected about $837 million in penalties from six financial institutions for U.S. sanctions programs violations. The Federal Reserve assessed its largest penalty, $508 million, against BNP Paribas for unsafe and unsound practices that failed to prevent the concealment of payment information of financial institutions subject to OFAC regulations. The penalty was assessed as part of a global settlement with DOJ. Federal Reserve officials stated that the remaining assessed penalties related to OFAC regulations were largely for similar unsafe and unsound practices. FinCEN and financial regulators have processes in place for receiving penalty payments from financial institutions—including for penalties assessed for the covered violations—and for depositing these payments. These payments are deposited into accounts in Treasury’s General Fund and are used for the general support of federal government activities. From January 2009 through December 2015, about $2.7 billion was collected from financial institutions for the covered violations and deposited into Treasury General Fund accounts. DOJ and Treasury also have processes in place for collecting forfeitures, fines, and penalties related to BSA and sanctions violations. Depending on which agency seizes the assets, forfeitures are generally deposited into two accounts—either DOJ’s AFF or Treasury’s TFF. From January 2009 through December 2015, about $3.2 billion was deposited into the AFF and $5.7 billion into the TFF, of which $3.8 billion related to a sanctions case was rescinded in the fiscal year 2016 appropriation legislation. Funds from the AFF and TFF are primarily used for program expenses, payments to third parties—including the victims of the related crimes—and equitable sharing payments to law enforcement agencies that participated in the efforts resulting in forfeitures. For the cases in our review, as of December 2015, DOJ and Treasury had distributed about $1.1 billion in payments to law enforcement agencies and approximately $2 billion was planned for distribution to victims of crimes. The remaining funds from these cases are subject to general rescissions to the AFF and TFF or may be used for program or other law enforcement expenses. DOJ officials stated that DOJ determines criminal fines on a case-by-case basis, in consideration of the underlying criminal activity and in compliance with relevant statutes. FinCEN and financial regulators deposit collections of penalties assessed against financial institutions—including for the covered violations—into Treasury’s General Fund accounts (see fig. 4). FinCEN deposits the penalty payments it receives in accounts in Treasury’s General Fund. First, FinCEN sends financial institutions a signed copy of the final consent order related to the enforcement action it has taken, along with instructions on how and when to make the penalty payment. Then, Treasury’s Bureau of Fiscal Service (BFS) collects payments from financial institutions, typically through a wire transfer.
OFAC officials explained that BFS also collects and tracks, on behalf of OFAC, payments for civil money penalties that OFAC assesses. BFS periodically notifies OFAC via e-mail regarding BFS’s receipt of payments of the assessed civil monetary penalties. FinCEN officials said that its Financial Management team tracks the collection of their penalties by comparing the amount assessed to Treasury’s Report on Receivables, which shows the status of government-wide receivables and debt collection activities and is updated monthly. Specifically, FinCEN staff compares their penalty assessments with BFS’s collections in Treasury’s Report on Receivables to determine if a penalty payment has been received or is past due. Once Treasury’s BFS receives payments for FinCEN- and OFAC-assessed penalties, BFS staff deposits the payments into the appropriate Treasury General Fund accounts. Financial regulators also have procedures for receiving and depositing these collections into Treasury’s General Fund accounts, as the following examples illustrate: SEC keeps records of each check, wire transfer, or online payment it receives, along with a record of the assessed amount against the financial institution, the remaining balance, and the reasons for the remaining balance, among other details related to the penalty. For collections we reviewed from January 2009 to December 2015 for BSA and FCPA violations, SEC had deposited all of them into a Treasury General Fund receipt account. Upon execution of an enforcement action involving a penalty, the Enforcement and Compliance Division within OCC sends a notification of penalties due to OCC’s Office of Financial Management. When the Office of Financial Management receives a payment for a penalty from a financial institution, it compares the amount with these notifications. The Office of Financial Management records the amount received and sends a copy of the supporting documentation (for example, a wire transfer or check) to the Enforcement and Compliance Division. OCC holds the payment in a civil money penalty account—an account that belongs to and is managed by OCC—before it deposits the payment in a Treasury General Fund receipt account on a monthly basis. The Federal Reserve directs financial institutions to wire their penalty payment to the Federal Reserve Bank of Richmond (FRBR). The Federal Reserve then verifies that the payment has been made in the correct amount to FRBR, and when it is made, FRBR distributes the penalty amount received to a Treasury General Fund receipt account. Federal Reserve officials explained that when they send the penalty to Treasury, they typically e-mail Treasury officials to verify that they have received the payment. They noted that when Treasury officials receive the penalty payment, they send a verification e-mail back to the Federal Reserve. According to officials, to keep track of what is collected and sent to the Treasury General Fund, FRBR retains statements that document both the collection and transfer of the penalty to a Treasury General Fund receipt account. FDIC has similar processes in place for collecting penalties related to BSA violations. When enforcement orders are executed, financial institutions send all related documentation (the stipulation for penalty payment, the order, and the check in the amount of the penalty payment) to FDIC’s applicable regional office Legal Division staff, which in turn sends the documentation to Legal Division staff in Washington, D.C. 
If the payment is wired, FDIC compares the amount wired to the penalty amount to ensure that the full penalty is paid. If the payment is a check, FDIC officials make sure the amount matches the penalty, document receipt of the payment in an internal payment log, and then send the check to FDIC’s Department of Finance. Once a quarter, FDIC sends penalty payments it receives to a Treasury General Fund receipt account. In addition to the processes we discuss in this report for penalty collections, SEC, Federal Reserve, OCC, and FDIC all have audited financial statements that include reviews of general internal controls over agency financial reporting, including those governing collections. From January 2009 through December 2015, FinCEN, OFAC, and financial regulators collected in total about $2.6 billion from financial institutions for the covered violations, but they did not retain any of the penalties they collected. Instead, the collections were deposited in Treasury General Fund accounts and used to support various federal government activities. Officials from these agencies stated that they have no discretion over the use of the collections, which must be transmitted to the Treasury. Once agencies deposit their collections into the Treasury General Fund, they are unable to determine what subsequently happens to the money, since it is commingled with other deposits. Treasury Office of Management officials stated that the collections deposited into the General Fund accounts are used according to the purposes described in Congress’s annual appropriations. More specifically, once a penalty collection is deposited into a receipt account in the Treasury General Fund, only an appropriation by Congress can begin the process of spending these funds. Appropriations from Treasury General Fund accounts are amounts appropriated by law for the general support of federal government activities. The General Fund Expenditure Account is an appropriation account established to record amounts appropriated by law for the subsequent expenditure of these funds, and includes spending from both annual and permanent appropriations. Treasury Office of Management officials explained that the Treasury General Fund has a general receipt account that receives all of the penalties that regulators and Treasury agencies collect for BSA, FCPA, and sanctions violations. Treasury officials explained that to ensure that the proper penalty amounts are collected, Treasury requires agency officials to reconcile the amount of deposits recorded in their general ledger to corresponding amounts recorded in Treasury’s government-wide accounts. If Treasury finds a discrepancy between the general ledger and the government-wide accounts, it sends the specific agency a statement asking for reconciliation. Treasury’s Financial Manual provides agencies with guidance on how to reconcile discrepancies and properly transfer money to the general receipt account. Treasury officials explained that they cannot associate a penalty collected for a specific violation with an expense from the General Fund as collections deposited in General Fund accounts are commingled. Forfeitures—including those from financial institutions for violations of BSA/AML and U.S. sanctions programs requirements—are deposited into three accounts depending in part on the agency seizing the assets (DOJ and other law enforcement agencies use the AFF, Treasury and the Department of Homeland Security use the TFF, and U.S.
Postal Inspection Service uses the Postal Service Fund). In the cases we reviewed, financial institutions forfeited either cash or financial instruments, which were generally deposited into the AFF or the TFF. Figure 5 shows the processes that govern the seizure and forfeiture of assets for the Justice Asset Forfeiture Program and the Treasury Forfeiture Program. The Justice Asset Forfeiture Program and the Treasury Forfeiture Program follow similar forfeiture processes. Under the Justice Asset Forfeiture Program, a DOJ investigative agency seizes an asset (funds in the cases we reviewed), and the asset is entered into DOJ’s Consolidated Asset Tracking System. The asset is then transferred to the U.S. Marshals Service for deposit into the Seized Asset Deposit Fund. The U.S. Attorney’s Office or the seizing agency must provide notice to interested parties and conduct Internet publication prior to entry of an administrative declaration of forfeiture or a court-ordered final order of forfeiture. Once the forfeiture is finalized, the seizing agency or the U.S. Attorney’s Office enters the forfeiture information into the Consolidated Asset Tracking System. U.S. Marshals Service subsequently transfers the asset from the Seized Asset Deposit Fund to the AFF. Similarly, the asset forfeiture process for the Treasury Forfeiture Program involves a Department of Homeland Security or Treasury investigative agency seizing the asset (funds, in the cases we reviewed). The seizing agency takes custody of the asset, enters the case into their system of record, and transfers the asset to the Treasury Suspense Account. Once forfeiture is final, the seizing agency subsequently requests that Treasury’s Executive Office for Asset Forfeiture staff transfer the asset from the Treasury Suspense Account to the TFF. According to Treasury’s Executive Office for Asset Forfeiture staff, each month, TFF staff compares deposits in the TFF with records from seizing agencies to review whether the amounts are accurately recorded. From January 2009 through December 2015, for the cases we reviewed, nine financial institutions forfeited about $3.2 billion in funds through the Justice Asset Forfeiture Program due to violations of BSA/AML and U.S. sanctions programs requirements. AFF expenditures are governed by the law establishing the AFF, as we have previously reported. Specifically, the AFF is primarily used to pay the forfeiture program’s expenses in three major categories: 1. program operations expenses in 13 expenditure categories such as asset management and disposal, storage and destruction of drugs, and investigative expenses leading to a seizure; 2. payments to third parties, including payments to satisfy interested parties such as owners or lien holders, as well as the return of funds to victims of crime; and 3. equitable sharing payments to state and local law enforcement agencies that participated in law enforcement efforts resulting in the forfeitures. In addition, after DOJ obligates funds to cover program expenses, any AFF funds remaining at the end of a fiscal year may be declared an excess unobligated balance and used for any of DOJ’s authorized purposes, including helping to cover rescissions. Court documents and DOJ data indicate that forfeitures from the Justice Asset Forfeiture Program cases we reviewed will be used to compensate victims and have been used to make equitable sharing payments. 
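Several of the controls described above, from FinCEN’s comparison of assessed penalties against collections in Treasury’s Report on Receivables to the monthly comparison of TFF deposits with seizing agencies’ records, rest on the same basic reconciliation step: match each expected amount to a recorded deposit and flag any gap. The short Python sketch below illustrates that logic in schematic form; the case identifiers, amounts, and data layout are hypothetical and do not represent any agency system or file format.

```python
# Schematic reconciliation: compare expected amounts (assessed penalties or
# seizing agencies' forfeiture records) against recorded deposits and flag gaps.
# Case identifiers and amounts are hypothetical; no agency system is implied.

expected = {            # case ID -> amount expected, in dollars
    "CASE-001": 500_000,
    "CASE-002": 2_250_000,
    "CASE-003": 75_000_000,
}

deposits = {            # case ID -> amount recorded as deposited
    "CASE-001": 500_000,
    "CASE-002": 2_000_000,   # partial payment
    # CASE-003 has no recorded deposit yet (for example, pending litigation)
}

def reconcile(expected, deposits):
    """Return (case ID, shortfall) pairs for cases not yet fully collected."""
    discrepancies = []
    for case_id, amount_due in expected.items():
        received = deposits.get(case_id, 0)
        if received < amount_due:
            discrepancies.append((case_id, amount_due - received))
    return discrepancies

for case_id, shortfall in reconcile(expected, deposits):
    print(f"{case_id}: ${shortfall:,} outstanding")
```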
Although DOJ data showed that DOJ has not yet remitted payments to any victims in the cases we reviewed, court documents and comments from DOJ officials indicated that approximately $2 billion of the forfeited funds deposited in the AFF would be remitted to victims of fraud. For example, according to Asset Forfeiture and Money Laundering Section officials, DOJ has set up the Madoff Victim Fund in part from the related $1.7 billion forfeited by JPMorgan Chase to collect and review victim claims related to the Ponzi scheme operated by Bernard Madoff. DOJ intends to distribute the funds to eligible victims of Madoff’s fraud. Additionally, DOJ data for seven cases showed that it had made approximately $660 million in equitable sharing payments. From January 2009 through December 2015, for the cases we reviewed, seven financial institutions forfeited about $5.7 billion in funds due to violations of BSA/AML and U.S. sanctions programs requirements through the Treasury Forfeiture Program. These forfeitures have been deposited in the TFF and can be used for certain purposes as specified by law. In the cases we identified, all seized and forfeited assets were cash. TFF expenditures are governed by the law establishing the TFF and, as we have previously reported, are primarily used to pay the forfeiture program’s expenses in major categories including program operation expenses, payments to third parties including crime victims, equitable sharing payments to law enforcement partners, and other expenses. Of the $5.7 billion contained in the TFF, the $3.8 billion paid by BNP Paribas as part of the bank’s settlement with DOJ was permanently rescinded from the TFF and is unavailable for obligation. The remaining funds, if not subject to general rescissions, can be used for a variety of purposes. As of December 2015, DOJ was considering using approximately $310 million in TFF forfeitures for victim compensation and, according to Treasury officials, Treasury had made approximately $484 million in equitable sharing payments and obligated a further $119 million for additional equitable sharing payments. As with the AFF, after Treasury obligates funds to cover program expenses, any TFF funds remaining at the end of a fiscal year, if not rescinded, may be declared an excess unobligated balance. These funds can be used to support a variety of law enforcement purposes, such as enhancing the quality of investigations. DOJ has litigated court cases against financial institutions for criminal violations of BSA/AML and U.S. sanctions programs requirements resulting in criminal fines ordered by the federal courts. According to DOJ officials, DOJ determines criminal fines on a case-by-case basis, in consideration of the underlying criminal activity and in compliance with relevant statutes. Court documents, such as court judgments and plea agreements, communicate the amount of the criminal fine to the financial institution. DOJ U.S. Attorneys’ Offices are primarily responsible for collecting criminal fines. They begin the collection process by issuing a demand letter to the financial institution. Upon receipt of the demand letter, the financial institution makes the payment to the Clerk of the Courts. According to officials from the Administrative Office of the U.S. Courts, the Clerk of the Courts initially collects the payments which are deposited into a Treasury account for DOJ’s Crime Victims Fund. 
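Taken together, the collection processes described above follow a consistent routing rule: civil penalties are deposited into Treasury General Fund receipt accounts, forfeitures go to the AFF, the TFF, or the Postal Service Fund depending on the seizing agency, and criminal fines are generally deposited into the Crime Victims Fund. The sketch below restates that routing in simplified form; the function and its categories are an illustrative abstraction of the processes described in this report, not an actual government system.

```python
# Simplified routing of a collection to a deposit account, abstracted from the
# processes described in this report. Illustration only, not an agency system.

def deposit_account(kind, seizing_agency=None):
    """Return the account a collection is generally deposited into."""
    if kind == "civil penalty":
        return "Treasury General Fund receipt account"
    if kind == "criminal fine":
        return "Crime Victims Fund"  # generally, per the process described above
    if kind == "forfeiture":
        if seizing_agency in ("DOJ", "other federal law enforcement"):
            return "Asset Forfeiture Fund (AFF)"
        if seizing_agency in ("Treasury", "DHS"):
            return "Treasury Forfeiture Fund (TFF)"
        if seizing_agency == "U.S. Postal Inspection Service":
            return "Postal Service Fund"
    raise ValueError(f"unrecognized collection: {kind}, {seizing_agency}")

print(deposit_account("criminal fine"))                      # Crime Victims Fund
print(deposit_account("forfeiture", seizing_agency="DOJ"))   # AFF
```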
Funds in the Crime Victims Fund can be used for authorized purposes including support of several state and federal crime victim assistance–related grants and activities, among other things. DOJ officials told us that all criminal fines, with a few exceptions, are deposited into the Crime Victims Fund. This may include criminal fines related to violations of BSA/AML requirements and U.S. sanctions regulations. In the cases we identified from January 2009 through December 2015, the courts ordered about $141 million in criminal fines for violations of BSA/AML and U.S. sanctions programs requirements. The $140 million fine assessed against BNP Paribas was deposited into the Crime Victims Fund. Additionally, in the cases we reviewed, DOJ had litigated a court case against a financial institution for civil violations of U.S. sanctions programs requirements, which resulted in a civil penalty. The civil penalty collection process is similar to the criminal fine collection process, but the financial institution makes the payment to DOJ’s accounts in the Treasury General Fund instead of to the Clerk of the Courts. As previously discussed in this report, monies in the Treasury General Fund are used according to the purposes described in Congress’s annual appropriations. Civil penalties may also be assessed a fee of up to 3 percent, which is disbursed to DOJ’s Three Percent Fund and primarily used to offset DOJ expenses related to civil debt collection. Of the cases we identified, one case involved a civil penalty of $79 million against Commerzbank for violating U.S. sanctions program requirements. DOJ collected the Commerzbank civil penalty, deposited it into DOJ’s accounts in the Treasury General Fund, and assessed a nearly 3 percent fee (about $2.3 million) that was deposited into the Three Percent Fund. We provided a draft of this report to Treasury, DOJ, SEC, OCC, FDIC, and the Federal Reserve for review and comment. Treasury, DOJ, OCC, FDIC, and the Federal Reserve provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to Treasury, DOJ, SEC, OCC, FDIC, and the Federal Reserve, and interested congressional committees and members. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Lawrance Evans at (202) 512-8678 or evansl@gao.gov or Diana C. Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. This report describes the fines, penalties, and forfeitures federal agencies have collected from financial institutions for violations of the Bank Secrecy Act and related anti-money-laundering requirements (BSA/AML), the Foreign Corrupt Practices Act of 1977 (FCPA), and U.S. sanctions programs requirements. Specifically, our objectives in this report were to describe (1) the amount of fines, penalties, and forfeitures that the federal government has collected for these violations from January 2009 through December 2015; and (2) the process for collecting these funds and the purposes for which they are used.
To address these objectives, we reviewed prior GAO and Office of the Inspector General reports and relevant laws and regulations. We also reviewed data and documentation and interviewed officials from key agencies responsible for implementing and enforcing BSA/AML, FCPA, and U.S. sanctions programs requirements. The agencies and offices included in this review were: (1) offices within the Department of the Treasury’s (Treasury) Office of Terrorism and Financial Intelligence, including officials from the Financial Crimes Enforcement Network (FinCEN), Office of Foreign Assets Control (OFAC), and Treasury Executive Office for Asset Forfeiture, and Treasury’s Office of Management; (2) Securities and Exchange Commission (SEC); (3) the federal banking regulators—Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), National Credit Union Administration (NCUA), and the Office of the Comptroller of the Currency (OCC); and (4) the Department of Justice (DOJ). To respond to our first objective, we identified and analyzed these agencies’ data on enforcement actions taken against financial institutions that resulted in fines, penalties, or forfeitures for violations of BSA/AML, FCPA, and U.S. sanctions programs requirements. Specifically, we analyzed publicly available data from January 2009 through December 2015 on penalties assessed against financial institutions by the Federal Reserve, FDIC, OCC, SEC, and the Financial Crimes Enforcement Network (FinCEN), a bureau within Treasury, for violations of BSA/AML requirements. NCUA officials we spoke with explained that they had not assessed any penalties against financial institutions for violations of BSA/AML requirements from January 2009 through December 2015. FDIC and SEC provided us with a list of enforcement actions they took for BSA/AML violations since 2009, as we were not able to identify all of their actions through their publicly available data. We also reviewed Federal Reserve data on penalties for violations of U.S. sanctions programs requirements and data that SEC provided on FCPA violations. In addition, we reviewed enforcement actions listed on Treasury’s Office of Foreign Assets Control (OFAC) website to identify penalties assessed against financial institutions for violations of U.S. sanctions programs requirements enforced by OFAC. To identify enforcement actions taken against financial institutions from the actions listed on OFAC’s website, we applied Treasury’s definition of financial institutions, which covers regulated entities in the financial industry. To identify criminal cases against financial institutions for violations of BSA and sanctions-related requirements, we reviewed press releases from DOJ’s Asset Forfeiture and Money Laundering Section, associated court documents, and enforcement actions taken against financial institutions from the actions listed on OFAC’s website (see table 3 for a list of these cases). We developed this approach in consultation with DOJ officials as their data system primarily tracks assets forfeited by the related case, which can include multiple types of violations, rather than by a specific type of violation, such as BSA or sanctions-related violations. Therefore, this report does not cover the entire universe of such criminal cases as they may not have all been publicized through this channel. However, this approach does include key cases for the period under our review that involved large amounts of forfeitures. 
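The screening step described above, identifying which publicly listed enforcement actions were taken against financial institutions, is essentially a filter applied to entity records. The following sketch shows that kind of filter in schematic form; the entity types and sample records are hypothetical placeholders, and the actual review applied Treasury’s definition of financial institutions rather than this simplified list.

```python
# Schematic filter for identifying enforcement actions taken against financial
# institutions. Entity types and sample records are hypothetical placeholders;
# the actual review applied Treasury's definition of financial institutions.

FINANCIAL_INSTITUTION_TYPES = {
    "bank", "credit union", "broker-dealer", "money services business", "casino",
}

enforcement_actions = [
    {"entity": "Example Bank N.A.", "entity_type": "bank", "penalty": 1_000_000},
    {"entity": "Example Shipping Co.", "entity_type": "shipping", "penalty": 250_000},
    {"entity": "Example Casino LLC", "entity_type": "casino", "penalty": 75_000},
]

financial_actions = [
    action for action in enforcement_actions
    if action["entity_type"] in FINANCIAL_INSTITUTION_TYPES
]

total_assessed = sum(action["penalty"] for action in financial_actions)
print(f"{len(financial_actions)} actions against financial institutions, "
      f"${total_assessed:,} assessed")
```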
We obtained data from DOJ’s Consolidated Asset Tracking System to determine the amounts forfeited for these cases, and verified any Treasury-related data in DOJ’s system by obtaining information from the Treasury Executive Office for Asset Forfeiture. DOJ had not brought any criminal cases against financial institutions for violations of FCPA. We assessed the reliability of the data we used in this report by reviewing prior GAO assessments of these data, interviewing knowledgeable agency officials, and reviewing relevant documentation, such as agency enforcement orders for the assessments. To verify that these amounts had been collected, we requested verifying documentation from agencies confirming that these assessments had been collected, and also obtained and reviewed documentation for a sample of the data to verify that the amount assessed matched the amount collected. As a result, we determined that these data were sufficiently reliable for our purposes. We also assessed the reliability of the DOJ data fields we reported on by reviewing prior GAO and DOJ evaluations of these data and interviewing knowledgeable officials from DOJ. We determined that these data were also sufficiently reliable for our report. To respond to our second objective—to describe how funds for violations of BSA/AML, FCPA, and U.S. sanctions programs requirements were collected—we identified and summarized documentation of the various steps and key agency internal controls for collection processes, such as procedures for how financial institutions remit payments. We also obtained documentation, such as statements documenting receipt of a penalty payment, for a sample of penalties. We interviewed officials from each agency about the process used to collect payments for assessed fines or penalties and, for relevant agencies, the processes for collecting cash and assets for forfeitures and where funds were deposited. To describe how these collections were used, we reviewed documentation on the types of expenditures that can be authorized from the accounts and funds they are deposited in. Specifically, we obtained documentation on the authorized or allowed expenditures for accounts in the Treasury General Fund, Treasury Forfeiture Fund (TFF), and DOJ’s Assets Forfeiture Fund (AFF) and Crime Victims Fund. We also reviewed relevant GAO and Office of Inspector General reports, and laws governing the various accounts. We conducted this performance audit from July 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Allison Abrams (Assistant Director), Tarek Mahmassani (Analyst-in-Charge), Bethany Benitez, Chloe Brown, Emily R. Chalmers, Chuck Fox, Tonita Gillich, Thomas Hackney, Valerie Kasindi, Dawn Locke, Jeremy Manion, Joshua Miller, John Mingus, and Jena Sinkfield made significant contributions to this report.
Over the last few years, billions of dollars have been collected in fines, penalties, and forfeitures assessed against financial institutions for violations of requirements related to financial crimes. These requirements are significant tools that help the federal government detect and disrupt money laundering, terrorist financing, bribery, corruption, and violations of U.S. sanctions programs. GAO was asked to review the collection and use of these fines, penalties, and forfeitures assessed against financial institutions for violations of these requirements—specifically, BSA/AML, FCPA, and U.S. sanctions programs requirements. This report describes (1) the amounts collected by the federal government for these violations, and (2) the process for collecting these funds and the purposes for which they are used. GAO analyzed agency data, reviewed documentation on agency collection processes and on authorized uses of the funds in which collections are deposited, and reviewed relevant laws. GAO also interviewed officials from Treasury (including the Financial Crimes Enforcement Network and the Office of Foreign Assets Control), Securities and Exchange Commission, Department of Justice, and the federal banking regulators. GAO is not making recommendations in this report. Since 2009, financial institutions have been assessed about $12 billion in fines, penalties, and forfeitures for violations of Bank Secrecy Act/anti-money-laundering regulations (BSA/AML), Foreign Corrupt Practices Act of 1977 (FCPA), and U.S. sanctions programs requirements by the federal government. Specifically, GAO found that from January 2009 to December 2015, federal agencies assessed about $5.2 billion for BSA/AML violations, $27 million for FCPA violations, and about $6.8 billion for violations of U.S. sanctions program requirements. Of the $12 billion, federal agencies have collected all of these assessments, except for about $100 million. (Figure: Collections of Fines, Penalties, and Forfeitures from Financial Institutions for Violations of Bank Secrecy Act/Anti-Money Laundering, Foreign Corrupt Practices Act, and U.S. Sanctions Programs Requirements, Assessed in January 2009–December 2015.) Agencies have processes for collecting payments for violations of BSA/AML, FCPA, and U.S. sanctions programs requirements and these collections can be used to support general government and law enforcement activities and provide payments to crime victims. Components within the Department of the Treasury (Treasury) and financial regulators are responsible for initially collecting penalty payments, verifying that the correct amount has been paid, and then depositing the funds into Treasury's General Fund accounts, after which the funds are available for appropriation and use for general support of the government. Of the approximately $11.9 billion collected, about $2.7 billion was deposited into Treasury General Fund accounts. The BSA and U.S. sanctions-related criminal cases GAO identified since 2009 resulted in the forfeiture of almost $9 billion through the Department of Justice (DOJ) and Treasury. Of this amount, about $3.2 billion was deposited into DOJ's Asset Forfeiture Fund (AFF) and $5.7 billion into the Treasury Forfeiture Fund (TFF), of which $3.8 billion related to a sanctions case was rescinded in fiscal year 2016 appropriations legislation.
Funds from the AFF and TFF are primarily used for program expenses, payments to third parties, including the victims of the related crimes, and payments to law enforcement agencies that participated in the efforts resulting in forfeitures. As of December 2015, DOJ and Treasury had distributed about $1.1 billion to law enforcement agencies and about $2 billion was planned for distribution to crime victims. Remaining funds from these cases are subject to general rescissions to the TFF and AFF or may be used for program or other law enforcement expenses.
To improve federal efforts to assist state and local personnel in preparing for domestic terrorist attacks, H.R. 525 would create a single focal point for policy and coordination—the President’s Council on Domestic Terrorism Preparedness—within the Executive Office of the President. The new council would include the President, several cabinet secretaries, and other selected high-level officials. An Executive Director with a staff would collaborate with executive agencies to assess threats; develop a national strategy; analyze and prioritize governmentwide budgets; and provide oversight of implementation among the different federal agencies. In principle, the creation of the new council and its specific duties appear to implement key actions needed to combat terrorism that we have identified in previous reviews. Following is a discussion of those actions, executive branch attempts to implement them, and how H.R. 525 would address them. In our May 2000 testimony, we reported that overall federal efforts to combat terrorism were fragmented. There are at least two top officials responsible for combating terrorism, and both of them have other significant duties. To provide a focal point, the President appointed a National Coordinator for Security, Infrastructure Protection and Counterterrorism at the National Security Council. This position, however, has significant duties indirectly related to terrorism, including infrastructure protection and continuity of government operations. Notwithstanding the creation of this National Coordinator, it was the Attorney General who led interagency efforts to develop a national strategy. H.R. 525 would set up a single, high-level focal point in the President’s Council on Domestic Terrorism Preparedness. In addition, H.R. 525 would require that the new council’s executive chairman—who would represent the President as chairman—be appointed with the advice and consent of the Senate. This last requirement would provide Congress with greater influence and raise the visibility of the office. We testified in July 2000 that one step in developing sound programs to combat terrorism is to conduct a threat and risk assessment that can be used to develop a strategy and guide resource investments. The executive branch has made progress in implementing our recommendations that threat and risk assessments be conducted to improve federal efforts to combat terrorism. However, we remain concerned that such assessments are not being coordinated across the federal government. H.R. 525 would require a threat, risk, and capability assessment that examines critical infrastructure vulnerabilities, evaluates federal and applicable state laws used to combat terrorist attacks, and evaluates available technology and practices for protecting critical infrastructure against terrorist attacks. This assessment would form the basis for the domestic terrorism preparedness plan and annual implementation strategy. In our July 2000 testimony, we also noted that there is no comprehensive national strategy that could be used to measure progress. The Attorney General’s Five-Year Plan represents a substantial interagency effort to develop a federal strategy, but it lacks desired outcomes. The Department of Justice believes that its current plan has measurable outcomes for specific agency actions. However, in our view, the plan needs to go beyond this to define an end state.
As we have previously testified, the national strategy should incorporate the chief tenets of the Government Performance and Results Act of 1993 (P.L. 103-62). The Results Act holds federal agencies accountable for achieving program results and requires federal agencies to clarify their missions, set program goals, and measure performance toward achieving these goals. H.R. 525 would require the new council to publish a domestic terrorism preparedness plan with objectives and priorities, an implementation plan, a description of federal, state, and local roles and activities, and a defined end state with measurable standards for preparedness. In our December 1997 report, we found that there was no mechanism to centrally manage funding requirements and to ensure an efficient, focused governmentwide approach to combat terrorism. Our work led to legislation that required the Office of Management and Budget to provide annual reports on governmentwide spending to combat terrorism. These reports represent a significant step toward improved management by providing strategic oversight of the magnitude and direction of spending for these programs. Yet we have not seen evidence that these reports have established priorities or identified duplication of effort as the Congress intended. H.R. 525 would require the new council to develop and make budget recommendations for federal agencies and the Office of Management and Budget. The Office of Management and Budget would have to provide an explanation in cases where the new council’s recommendations were not followed. The new council would also identify and eliminate duplication, fragmentation, and overlap in federal preparedness programs. In our April 2000 testimony, we observed that federal programs addressing terrorism appear in many cases to be overlapping and uncoordinated. To improve coordination, the executive branch has created organizations like the National Domestic Preparedness Office and various interagency working groups. In addition, the annual updates to the Attorney General’s Five-Year Plan track individual agencies’ accomplishments. Nevertheless, we have noted that the multitude of similar federal programs has led to confusion among the state and local first responders they are meant to serve. H.R. 525 would require the new council to coordinate and oversee the implementation of related programs by federal agencies in accordance with the domestic terrorism preparedness plan. The new council would also make recommendations to the heads of federal agencies regarding their programs. Furthermore, the new council would provide written notification to any department that it believes is not in compliance with its responsibilities under the plan. Federal efforts to combat terrorism are inherently difficult to lead and manage because the policy, strategy, programs, and activities to combat terrorism cut across more than 40 agencies. Congress has been concerned with the management of these programs and, in addition to H.R. 525, two other bills have been introduced to change the overall leadership and management of programs to combat terrorism. On March 21, 2001, Representative Thornberry introduced H.R. 1158, the National Homeland Security Act, which advocates the creation of a cabinet-level head within the proposed National Homeland Security Agency to lead homeland security activities. On March 29, 2001, Representative Skelton introduced H.R. 
1292, the Homeland Security Strategy Act of 2001, which calls for a homeland security strategy to be developed by a single official designated by the President. In addition, several other proposals from congressional committee reports and various commission reports advocate changes in the structure and management of federal efforts to combat terrorism. These include Senate Report 106-404 to Accompany H.R. 4690 on the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriation Bill 2001, submitted by Senator Gregg on September 8, 2000; the report by the Gilmore Panel (the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction, chaired by Governor James S. Gilmore III) dated December 15, 2000; the report of the Hart-Rudman Commission (the U.S. Commission on National Security/21st Century, chaired by Senators Gary Hart and Warren B. Rudman) dated January 31, 2001; and a report from the Center for Strategic and International Studies (Executive Summary of Four CSIS Working Group Reports on Homeland Defense, chaired by Messrs. Frank Cilluffo, Joseph Collins, Arnaud de Borchgrave, Daniel Goure, and Michael Horowitz) dated 2000. The bills and related proposals vary in the scope of their coverage. H.R. 525 focuses on federal programs to prepare state and local governments for dealing with domestic terrorist attacks. Other bills and proposals address the larger issue of homeland security, which includes threats other than terrorism, such as military attacks. H.R. 525 would attempt to resolve cross-agency leadership problems by creating a single focal point within the Executive Office of the President. The other related bills and proposals would also create a single focal point for programs to combat terrorism, and some would have the focal point perform many of the same functions. For example, some of the proposals would have the focal point lead efforts to develop a national strategy. The proposals (with one exception) would have the focal point appointed with the advice and consent of the Senate. However, the various bills and proposals differ in where they would locate the focal point for overall leadership and management. The two proposed locations for the focal point are the Executive Office of the President (as in H.R. 525) or a Lead Executive Agency. Table 1 shows various proposals regarding the focal point for overall leadership, the scope of its activities, and its location. 
The proposals shown in table 1 would place the focal point either in the Executive Office of the President or in a Lead Executive Agency (the proposed National Homeland Security Agency or the Department of Justice). The scope of their activities ranges from domestic terrorism preparedness and domestic and international terrorism (crisis and consequence management) to broader homeland security and homeland defense missions that also cover maritime and border security, disaster relief, and the protection of critical infrastructure against conventional and unconventional threats. Based upon our analysis of legislative proposals, various commission reports, and our ongoing discussions with agency officials, each of the two locations for the focal point—the Executive Office of the President or a Lead Executive Agency—has its potential advantages and disadvantages. An important advantage of placing the focal point in the Executive Office of the President is that it would be positioned to rise above the particular interests of any one federal agency. Another advantage is that the focal point would be located close to the President, which would help it resolve cross-agency disagreements. A disadvantage of such a focal point would be the potential to interfere with operations conducted by the respective executive agencies. Another potential disadvantage is that the focal point might hinder direct communications between the President and the cabinet officers in charge of the respective executive agencies. Alternatively, a focal point in a Lead Executive Agency could have the advantage of providing a clear and streamlined chain of command within an agency in matters of policy and operations. Under this arrangement, we believe that the Lead Executive Agency would have to be one with a dominant role in both policy and operations related to combating terrorism. Specific proposals have suggested that this agency could be either the Department of Justice (per Senate Report 106-404) or an enhanced Federal Emergency Management Agency (per H.R. 1158 and its proposed National Homeland Security Agency). Another potential advantage is that the cabinet officer of the Lead Executive Agency might have better access to the President than a mid-level focal point in the Executive Office of the President. A disadvantage of the Lead Executive Agency approach is that the focal point—which would report to the cabinet head of the Lead Executive Agency—would lack autonomy. Further, a Lead Executive Agency would have other major missions and duties that might distract the focal point from combating terrorism. Also, other agencies may view the focal point’s decisions and actions as parochial rather than in the collective best interest. H.R. 525 would provide the new President’s Council on Domestic Terrorism Preparedness with a variety of duties. In conducting these duties, the new council would, to the extent practicable, rely on existing documents, interagency bodies, and existing governmental entities. Nevertheless, the passage of H.R. 525 would warrant a review of several existing organizations to compare their duties with the new council’s responsibilities. 
In some cases, those existing organizations may no longer be required or would have to conduct their activities under the supervision of the new council. For example, the National Domestic Preparedness Office was created to be a focal point for state and local governments and has a state and local advisory group. The new council would have similar duties that may eliminate the need for the National Domestic Preparedness Office. As another example, we believe the overall coordinating role of the new council may require adjustments to the coordinating roles played by the Federal Emergency Management Agency, the Department of Justice’s Office of State and Local Domestic Preparedness Support, and the National Security Council’s Weapons of Mass Destruction Preparedness Group in the policy coordinating committee on Counterterrorism and National Preparedness. In our ongoing work, we have found that there is no consensus—either in Congress, the Executive Branch, the various commissions, or the organizations representing first responders—as to whether the focal point should be in the Executive Office of the President or a Lead Executive Agency. Developing such a consensus on the focal point for overall leadership and management, determining its location, and providing it with legitimacy and authority through legislation is an important task that lies ahead. We believe that this hearing, and the debate that it engenders, will help to reach that consensus. This concludes our testimony. We would be pleased to answer any questions you may have. For future questions about this statement, please contact Raymond J. Decker, Director, Defense Capabilities and Management, at (202) 512-6020. Individuals making key contributions to this statement include Stephen L. Caldwell and Krislin Nalwalk.
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Combating Terrorism: Comments on Bill H.R. 4210 to Manage Selected Counterterrorist Programs (GAO/T-NSIAD-00-172, May 4, 2000).
Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000).
Combating Terrorism: Issues in Managing Counterterrorist Programs (GAO/T-NSIAD-00-145, Apr. 6, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences (GAO/AIMD-00-1, Oct. 1, 1999).
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999).
Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999).
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorist Operations (GAO/NSIAD-99-135, May 13, 1999).
Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, Mar. 11, 1999).
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998).
This testimony discusses the Preparedness Against Domestic Terrorism Act of 2001 (H.R. 525). To improve federal efforts to help state and local personnel prepare for domestic terrorist attacks, H.R. 525 would create a single focal point for policy and coordination--the President's Council on Domestic Terrorism Preparedness--within the Executive Office of the President. The new council would include the President, several cabinet secretaries, and other selected high-level officials. H.R. 525 would (1) create an executive director position with a staff that would collaborate with other executive agencies to assess threats, (2) require the new council to develop a national strategy, (3) require the new council to analyze and review budgets, and (4) require the new council to oversee implementation among the different federal agencies. Other proposals before Congress would also create a single focal point for combating terrorism. Some of these proposals place the focal point in the Executive Office of the President and others place it in a lead executive agency. Both locations have advantages and disadvantages.
For decades, Colombia was one of Latin America’s more stable democracies and successful economies. However, by the late 1990s it had entered a period of sustained crisis due to the emerging strength of the FARC, the National Liberation Army (ELN), and paramilitary groups (primarily the United Self-Defense Forces of Colombia, or AUC), which were increasingly financing their activities through profits from illicit narcotics. These groups were assuming increasing control of the coca and opium poppy growing areas of the country through widespread violence and human rights abuses, which affected to varying degrees each of Colombia’s 32 departments (see fig. 1). Colombia suffered a severe economic downturn in the late 1990s as its armed forces and police were unable to respond to the growing strength of these illegal armed groups, and levels of murder, kidnapping, extortion, economic sabotage, and illicit drug trafficking spiraled upward. According to State, in the 7 years prior to Plan Colombia, coca cultivation had increased by over 300 percent and opium poppy cultivation had increased by 75 percent. Despite U.S. and Colombian efforts to counter the drug-trafficking activities of these illegal armed groups, State reports that Colombia remains the source of about 90 percent of the cocaine entering the United States and the primary source of heroin east of the Mississippi River. According to State officials, FARC and other illegal groups remain active in areas where coca and opium poppy are grown and are involved in every facet of the narcotics business, from cultivation to transporting drugs to points outside Colombia. Announced by Colombian President Andres Pastrana in 1999, Plan Colombia was designed to counter the country’s drug and security crisis through a comprehensive 6-year, $7.5 billion plan linked to three objectives: (1) reduce the flow of illicit narcotics and improve security, (2) promote social and economic justice, and (3) promote the rule of law. While the latter two objectives were not specifically designed to reduce the flow of illicit narcotics and improve security, they broadly facilitate these goals by addressing some of the underlying social and economic realities that drive individuals toward the illicit drug trade and by providing a legal framework for bringing drug traffickers and terrorists to justice. As shown in figure 2, State and Defense assistance for the Colombian military and National Police has supported a counternarcotics strategy focused on reducing illicit narcotics and improving security. Central to this support have been State-led efforts to provide the Colombians with air mobility, which supports the full range of military programs and many nonmilitary programs by providing access and security in remote areas. Nonmilitary assistance efforts are implemented by USAID, Justice, and State, which oversee a diverse range of social, economic, and justice initiatives. In January 2007, the government of Colombia announced a 6-year follow-on strategy, the PCCP. This new strategy includes the same three broad objectives as Plan Colombia. The government of Colombia has pledged to provide approximately $44 billion for PCCP. The strategy notes that a certain level of support from the international community is still essential. At the time, the United States developed a proposed funding plan of approximately $4 billion in U.S. support for PCCP for fiscal years 2007 through 2013. 
The government of Colombia significantly expanded the security component of Plan Colombia with its Democratic Security and Defense Policy in June 2003, which outlined a “clear, hold, and consolidate” strategy. The strategy’s main objective was to assert state control over the majority of Colombia’s national territory, particularly in areas affected by the activities of illegal armed groups and drug traffickers. Colombian officials said this new strategy would take years to fully implement. (See fig. 3.) Expanded authority approved by the U.S. Congress at about the same time allowed agencies to support this security strategy. The government of Colombia has taken a number of steps to implement this strategy, including:
Increasing the size of its military and police from 279,000 in 2000 to 415,000 in 2007.
Conducting a series of offensive actions against FARC under a military strategy called Plan Patriota, which began in June 2003 with efforts to clear FARC from areas surrounding Colombia’s capital, Bogotá. In mid-2004, the military implemented a second, more ambitious phase of Plan Patriota aimed at attacking key FARC fronts encompassing the southern Colombian departments of Caquetá, Guaviare, and Meta. Based in Larandia, Joint Task Force-Omega was established in 2004 to coordinate the efforts of the Colombian Army, Air Force, and Marines in this area.
Creating the Coordination Center for Integrated Government Action (CCAI) in 2004 to coordinate the delivery of military and civilian assistance in 58 targeted municipalities emerging from conflict in 11 regions throughout Colombia.
An updated version of the Colombian defense strategy was released in coordination with the PCCP strategy in January 2007. Incorporating lessons learned from the 2003 strategy, this latest strategy focuses on clearing one region at a time and places a greater emphasis on consolidating military gains through coordinated civil-military assistance. This assistance is designed to solidify the government’s presence in previously conflictive areas by providing a range of government services to local populations. To implement this strategy, the government of Colombia has taken several actions, including focusing Joint Task Force-Omega’s efforts in La Macarena—a traditional FARC stronghold—through a new military offensive called Plan Consolidación. The government also developed a coordinated military and civilian plan of action called the Consolidation Plan for La Macarena, which has been in place since October 2007. As part of this plan, CCAI established a joint civil-military fusion center to coordinate military, police, economic development, and judicial activities. If successful, the approach in La Macarena is intended to serve as a model for similar CCAI efforts in 10 other regions of the country. It represents a key test of the government’s enhanced state presence strategy and a potential indicator of the long-term prospects for reducing Colombia’s drug trade by systematically re-establishing government control throughout the country. Between fiscal years 2000 and 2008, the United States has provided over $6 billion in military and nonmilitary assistance to Colombia. (See table 1.) Most State assistance for Colombia is overseen by its Bureau for International Narcotics and Law Enforcement Affairs (State/INL), though the Bureau for Political and Military Affairs is responsible for FMF and IMET funds. State/INL’s Narcotics Affairs Section (NAS) in the U.S. Embassy Bogotá oversees daily program operations. 
State’s Office of Aviation supports the NAS with advisors and contract personnel who are involved in the implementation of U.S. assistance provided to the Colombian Army’s Plan Colombia Helicopter Program (PCHP) and the National Police’s Aerial Eradication Program. The Military Group in the U.S. Embassy Bogotá manages both Defense counternarcotics support and State FMF and IMET funding. USAID and Justice have full-time staff based in Bogotá to oversee and manage their nonmilitary assistance programs. U.S. agencies are supported in their efforts by an extensive U.S.-funded contract workforce, which provides a range of services from aviation program support to alternative development project implementation. From the outset of Plan Colombia, Congress has stated that U.S. assistance efforts should be nationalized over time and has followed through with a number of specific reporting requirements and budget decisions to help ensure this objective is achieved. Beginning in 2004, Congress signaled that U.S. program administrators should begin the process of drawing down support for U.S.-financed aviation programs in Colombia, which it noted accounted for a significant portion of U.S. assistance to Colombia. In 2005, House appropriators requested that the administration develop a multiyear strategy defining U.S. program and nationalization plans going forward under the PCCP. The administration responded in March 2006 with a report to Congress that described program achievements under Plan Colombia and broadly outlined planned nationalization efforts, beginning with U.S.-financed aviation programs. Follow-on reports issued in April 2007 and April 2008 further refined the administration’s plans by providing a proposed funding plan illustrating how U.S. assistance efforts would be reduced from 2007 through 2013 as the Colombians assume greater responsibility for programs funded and managed by the United States. Plan Colombia’s goal of reducing the cultivation, processing, and distribution of illegal narcotics by targeting coca cultivation was not achieved. Although estimated opium poppy cultivation and heroin production were reduced by about 50 percent, coca cultivation and cocaine production increased, though data from 2007 indicate that cocaine production slightly declined. Colombia’s security climate has improved as a result of progress in a number of areas, but U.S. and Colombian officials cautioned that current programs must be maintained for several years before security gains can be considered irreversible. From 2000 to 2006, estimated opium poppy cultivation and heroin production declined about 50 percent, but coca cultivation and cocaine production increased over the period. To put Colombia’s 6-year drug reduction goal in perspective, we note that although U.S. funding for Plan Colombia was approved in July 2000, many U.S.-supported programs to increase the Colombian military and police capacity to eradicate drug crops and disrupt the production and distribution of heroin and cocaine did not become operational until 2001 and later. Meanwhile, estimated illicit drug cultivation and production in Colombia continued to rise through 2001 and then declined from 2002 through 2004. However, the declines for coca cultivation and cocaine production were not sustained. In addition, the estimated flow of cocaine towards the United States from South America rose over the period. 
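The percent-change comparisons reported with figures 5 and 6 in the next paragraph are straightforward arithmetic on the published cultivation and production estimates. The following is a minimal illustrative sketch in Python; the input values are the hectare and metric-ton estimates cited below, while the script itself is an added illustration for clarity and is not part of the underlying interagency methodology.

# Illustrative check of the percent-change figures cited in this section.
# Input values are the estimates quoted in the report; the script is an
# added sketch, not part of the original analysis.

def percent_change(old, new):
    # Percent change from the earlier estimate to the later one.
    return (new - old) / old * 100

coca_2000, coca_2006 = 136_200, 157_000      # hectares of coca under cultivation
cocaine_2000, cocaine_2006 = 530, 550        # metric tons of cocaine produced
cocaine_2001, cocaine_2007 = 700, 535        # revised production estimates

print(f"Coca cultivation, 2000 to 2006: {percent_change(coca_2000, coca_2006):+.0f}%")          # about +15%
print(f"Cocaine production, 2000 to 2006: {percent_change(cocaine_2000, cocaine_2006):+.0f}%")  # about +4%
print(f"Revised production, 2001 to 2007: {percent_change(cocaine_2001, cocaine_2007):+.0f}%")  # almost -25%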
As illustrated in figure 4, estimated opium poppy cultivation and heroin production levels in 2006 were about half of what they had been in 2000. As illustrated in figure 5, coca cultivation was about 15 percent greater in 2006 than in 2000, with an estimated 157,000 hectares cultivated in 2006 compared to 136,200 hectares in 2000. State officials noted that extensive aerial and manual eradication efforts during this period were not sufficient to overcome countermeasures taken by coca farmers, as discussed later in this report. U.S. officials also noted that the increase in estimated coca cultivation levels from 2005 through 2007 may have been due, at least in part, to the Crime and Narcotics Center’s decision to increase the size of the coca cultivation survey areas in Colombia beginning in 2004 and subsequent years. As illustrated in figure 6, estimated cocaine production was about 4 percent greater in 2006 than in 2000, with 550 metric tons produced in 2006 compared to 530 metric tons in 2000. However, in September 2008, ONDCP officials noted that cocaine production did not keep pace with rising coca cultivation levels because eradication efforts had degraded coca fields, so less cocaine was being produced per hectare of cultivated coca. ONDCP also announced that estimated cocaine production rates in Colombia for 2003 through 2007 had been revised downward based on the results of recent research showing diminished coca field yield rates. On the basis of these revised estimates, ONDCP estimated that cocaine production decreased by almost 25 percent, from a high of 700 metric tons in 2001 to 535 metric tons in 2007. As illustrated in figure 7, in 2000, the interagency counternarcotics community estimated that 460 metric tons of cocaine was flowing towards the United States from South America. In 2004, the interagency community began reporting low and high ranges of estimated flow. Using the midpoints of these ranges, the estimated flow of cocaine to the United States in 2004 was about 500 metric tons; in 2005 it rose to over 625 metric tons; in 2006 and 2007, it was about 620 metric tons. Reductions in Colombia’s estimated cocaine production have been largely offset by increases in cocaine production in Peru and, to a lesser extent, Bolivia. Although U.S. government estimates suggest that South American cocaine production levels have fluctuated since 2000, production in 2007 was 12 percent higher than in 2000. See appendix III for more detail about the interagency counternarcotics community’s estimates of coca cultivation and cocaine production in Colombia, Bolivia, and Peru. Since 2000, U.S. assistance has enabled the Colombians to achieve significant security advances in two key areas. First, the government has expanded its presence throughout the country, particularly in many areas formerly dominated by illegal armed groups. Second, the government, through its counternarcotics strategy, military and police actions, and other efforts (such as its demobilization and deserter programs), has degraded the finances of illegal armed groups and weakened their operational capabilities. These advances have contributed to an improved security environment, as shown by key indicators (see figs. 8 through 10) reported by the government of Colombia. One central tenet of Plan Colombia and follow-on security plans is that the Colombian government must reassert and consolidate its control in contested areas dominated or partially controlled by illegal armed groups. 
According to an analysis provided by the Colombian Ministry of Defense in February 2008, the government was in full or partial control of about 90 percent of the country in 2007 compared with about 70 percent in 2003. U.S. officials we spoke to generally agreed that the government of Colombia had made major progress reasserting its control over large parts of the country and that Colombia’s estimates of enhanced state presence were reasonably accurate. U.S. and Colombian officials and some observers agree that Plan Colombia’s counternarcotics and counterterrorism efforts have degraded the finances and operating capacity of illegal armed groups, including FARC, paramilitaries, ELN, and other drug-trafficking organizations. However, these officials also cautioned that FARC, while severely weakened, remains a threat to Colombia’s national security.
FARC’s Capabilities and Finances Have Been Significantly Reduced, but It Remains a National Security Threat
According to U.S. and Colombian officials and some reports, FARC’s capabilities and finances have been substantially diminished as a result of U.S. and Colombian counternarcotics efforts and continued pressure from the Colombian military. According to the Drug Enforcement Administration, since 2000, FARC has been Colombia’s principal drug-trafficking organization, accounting for approximately 60 percent of the cocaine exported from Colombia to the United States. According to ONDCP, FARC membership has declined from an estimated high of 17,000 in 2001 to an estimated force of 8,000 or less today. In June 2007, ONDCP reported that Colombia’s antidrug efforts reduced FARC’s overall profits per kilogram of cocaine from a range of $320 to $460 in 2003 to between $195 and $320 in 2005. According to State and embassy officials, and nongovernmental observers, the number of FARC combatants and its capabilities have been dramatically reduced by continuous assaults on its top leadership, the capture or killing of FARC members in conflictive zones, and a large number of desertions. In 2007, the Colombian Ministry of Defense reported that it had captured or killed approximately 4,600 FARC combatants and that about 2,500 had demobilized. According to the Colombian Ministry of Defense, as of July 2008, over 1,700 FARC members have demobilized this year—over two-thirds of the total for all of 2007. U.S. Military Group officials told us FARC now avoids direct combat with Colombian security forces and is limited to hit-and-run terrorist attacks. Nonetheless, Defense and Colombian officials caution that FARC remains a national security threat, exercising control over important parts of the country, such as Meta, which serves as a key transport corridor linking many of the coca cultivation areas in the eastern part of the country with the Pacific ports used to transport cocaine out of the country. According to U.S. military group officials, the government of Colombia’s goal is to reduce FARC’s members, finances, and operating capabilities so it no longer poses a national security threat. To achieve this goal, Colombian President Uribe has accelerated the pace of all activities to help ensure this happens by 2010, when his current term ends. However, according to U.S. Military Group officials, FARC will not reach the point where it can no longer pose a significant threat to Colombia’s government until the number of combatants is reduced to less than 4,000. In February 2008, U.S. 
Military Group officials told us that they estimated this point could be reached in 18 months, but not without continued U.S. support.
AUC Has Demobilized, but Remnants Remain a Threat
Beginning in late 2003, AUC entered into a peace accord with the government of Colombia to demobilize and lay down its arms. From 2003 to 2006, AUC paramilitary members reported to demobilization centers around the country. According to USAID officials, approximately 32,000 paramilitary soldiers and support staff entered the demobilization process. However, according to Defense officials, former midlevel officers of AUC have taken advantage of the vacuum created by the demobilization of AUC to form or join regional criminal bands engaged in drug trafficking, which threaten to destabilize the political system and civilian security. According to a May 2007 report by the International Crisis Group, estimates of the total number of individuals involved in these criminal bands range from 3,000 to 9,000, with many of the members former AUC. These include the “Aguilas Negras” (Black Eagles), which operates in northeastern Colombia along the border with Venezuela, and the “Nueva Generación Organización” (New Generation Organization), which operates in the department of Nariño. According to Defense officials, while homicides and kidnappings throughout Colombia have decreased, fighting among illegal armed groups has resulted in an increase in violence and internal displacement in certain regions of the country, such as the southern Colombian department of Nariño.
ELN Has Been Weakened and Drug-Trafficking Organizations Have Been Fragmented
According to U.S. embassy and Colombian military officials, a number of factors, including Colombian counternarcotics efforts, military pressure, and competition with FARC, have combined to weaken ELN. According to U.S. military group officials, in 2000, ELN was estimated to number approximately 5,000 combatants; it is currently estimated to number between 2,200 and 3,000. According to the Drug Enforcement Administration, in addition to the insurgent and paramilitary groups that engage in drug trafficking, other major drug-trafficking groups operate in Colombia. These include the North Valle de Cauca group based in the southwestern Colombian department of Valle de Cauca and the North Coast group based in the Caribbean cities of Cartagena, Barranquilla, and Santa Marta. According to Drug Enforcement Administration officials and reports, Colombian law enforcement successes, including the arrest and extradition of major traffickers, have helped fragment these groups, forcing them to become “niche” organizations, specializing in limited aspects of the drug trade in order to avoid being identified, arrested, and prosecuted. Nevertheless, according to a 2006 Drug Enforcement Administration report, these organizations are increasingly self-sufficient in cocaine base production, have a firm grip on Caribbean and Pacific smuggling routes, and dominate the wholesale cocaine markets in the eastern United States and Europe. State and Defense provided nearly $4.9 billion from fiscal years 2000 to 2008 to the Colombian military and police to support Plan Colombia’s counternarcotics and security objectives (see table 2). U.S. 
assistance to the Colombian military has focused on developing the capabilities of the Colombian Army’s Aviation Brigade and the creation of an Army Counternarcotics Brigade and mobile units that focus on counternarcotics, infrastructure protection, and counterinsurgency missions. State and Defense also provided extensive support for the Air Force’s Air Bridge Denial Program and for Navy and Marine interdiction efforts. U.S. support for the National Police has focused on its Aerial Eradication Program and Air Service. Other U.S. assistance supported (1) the creation of mobile squadrons of rural police (referred to as “Carabineros”), which have helped establish a police presence in 169 Colombian municipalities that had no police presence in 2002, and (2) specialized interdiction programs that attack cocaine labs and narcotrafficking in the ports. This support has led to a range of accomplishments since 2000, including increasing the cost of doing business for both coca farmers and drug traffickers by eradicating illicit drug crops and seizing finished product; destroying hydrochloride laboratories; demobilizing, capturing, and killing thousands of combatants; and capturing or killing several high-profile leaders of FARC and other illegal armed groups. Program officials noted, however, that a number of challenges have diminished the effect U.S. assistance has had on reducing the flow of cocaine to the United States, including the countermeasures taken by coca farmers to mitigate the effect of U.S. and Colombian eradication programs. Since fiscal year 2000, State and Defense have provided over $844 million to help expand and maintain an Army Aviation Brigade that has seen almost a threefold increase in the number of aircraft it manages and a near doubling in its total personnel since 2000. Increased air mobility has been described by the Colombian Ministry of Defense as a key outcome of U.S. support for Plan Colombia. Air mobility is needed to conduct spray operations and move Army Counternarcotics Brigade personnel to eradication sites to provide needed security. Air mobility is also needed to transport different Colombian army units waging security operations against FARC and other illegal armed groups where rapid deployment is essential for delivering combat troops to the point of attack. The brigade consists of three fleets of helicopters. The first, referred to as the Plan Colombia Helicopter Program or PCHP, consists of 52 U.S. aircraft—17 UH-1Ns, 22 UH-IIs, and 13 UH-60L Blackhawks—that State provided to the Colombians under a no-cost lease. The second fleet, commonly referred to as the FMS fleet, consists of 20 UH-60Ls, which Colombia acquired through the Foreign Military Sales (FMS) program. The third fleet consists primarily of Russian and U.S. aircraft leased by the Army Aviation Brigade, along with aircraft that have been nationalized. State, with assistance from Defense, has provided the PCHP fleet with the essential support components needed to manage a modern combat aviation service, including infrastructure and maintenance support; contract pilots and mechanics; assistance to train pilots and mechanics; flight planning, safety, and quality standards and procedures; and a logistics system. Defense provides a Technical Assistance Field Team to support the brigade’s FMS fleet. The team is contracted to provide oversight of FMS fleet maintenance activities and to help train brigade mechanics working on these helicopters. 
Defense is also providing the Ministry of Defense with a logistics system and a limited aviation depot to enable the Colombians to perform certain depot-level repairs on their helicopters. Appendix II describes these support services in more detail. Figure 11 illustrates some examples. According to U.S. and Colombian officials, a key challenge facing the brigade is to train and retain enough pilots and mechanics to manage the brigade without continued U.S. support—a challenge we have noted previously. In June 2003, we reported that the Colombian Army could not maintain the PCHP helicopters because it did not have sufficient numbers of qualified pilots and mechanics. At that time, U.S. officials expected they would have enough trained entry-level pilots by December 2004. They also told us that 54 maintenance personnel required basic training, but noted that it would be 3 to 5 years before these mechanics would be qualified to repair helicopters. We found that the Army Aviation Brigade is still understaffed. According to State, as of June 2008, a total of 43 contract pilots and 87 contract mechanics were needed to operate the PCHP program. U.S. officials expect that almost all of these contract personnel will be replaced with Colombian Army personnel by 2012, at which time U.S. program officials said all remaining program support to the Army Aviation Brigade would consist of technical support. According to the Commander of the Army Aviation Brigade, however, the Colombians are buying 15 additional UH-60 Blackhawks through the FMS system for delivery starting in October 2008 and, in July 2008, the United States loaned 18 UH-1Ns from PCHP’s inventory to Colombia. These additional helicopters will strain U.S. efforts to help the Colombians ensure they have enough trained pilots and mechanics to meet their needs. Military Group and NAS officials told us that current U.S. funding and training plans can accommodate Colombia’s planned FMS purchase and the 18 loaned UH-1Ns. These officials cautioned, however, that any additional Colombian aircraft purchases will have a significant impact on future funding and training requirements. While the Colombian Army has not had difficulty retaining pilots, the lack of a dedicated career path that provides an incentive for pilots to remain with the brigade could adversely affect retention. According to a U.S. Embassy Bogotá report, the lack of a warrant officer program means that, to earn promotion, Army Aviation Brigade officers must command ground troops, taking them away from being helicopter pilots. This lack of a dedicated career path may be a problem as more junior staff progress in their careers. According to the Commander of the Army Aviation Brigade, the Colombian Army has approved plans to establish a career path for military and police aviators by creating a warrant officer program. However, the Ministry of Defense and the Colombian legislature must approve this before the program can begin. Since fiscal year 2000, State and Defense have provided over $104 million to advise, train, and equip Colombian ground forces, which grew by almost 50 percent during this period. This assistance supported the creation of an Army Counternarcotics Brigade, Army mobile units, and a Joint Special Operations Command. Each pursues various counternarcotics and counterinsurgency missions under a national joint command structure. 
The Army’s Counternarcotics Brigade was originally established in 1999 to plan and conduct interdiction operations against drug traffickers in southern Colombia. U.S. and Colombian officials credit the brigade with providing the security needed to conduct aerial and manual eradication operations, along with drug and precursor seizures and the destruction of base and hydrochloride laboratories. The brigade’s initial focus was on the departments of Putumayo and Caquetá where, at the time, much of Colombia’s coca cultivation was located. Subsequently, the brigade was designated a national asset capable of operating anywhere in Colombia. The brigade’s mission was also extended to include counterinsurgency operations in line with expanded program authority passed by Congress in 2002 that allowed U.S. assistance to be used for both counternarcotics and counterterrorism purposes. Defense provided the brigade with training, equipment, and infrastructure support, including the construction of facilities at Tres Esquinas and Larandia, while State assistance provided the brigade with weapons, ammunition, and training. The brigade carries out ground interdiction operations and provides ground security for the National Police’s aerial and manual eradication efforts. The brigade is supported by the Army Aviation Brigade, which provides air mobility. According to State and U.S. military group officials, the brigade now provides its own training and most of its equipment. Beginning in fiscal year 2004, State reduced the amount of funding for the brigade from approximately $5 million to $2.2 million in fiscal year 2007. It is scheduled to remain at this level in fiscal year 2008. Defense-provided support has helped equip mobile Army brigades and joint special forces units which, according to Defense officials, seek to establish “irreversible” security gains against FARC and other illegal armed groups. In particular, this assistance (1) enabled the Army to form mobile brigades for counterinsurgency efforts, such as Joint Task Force-Omega in central Colombia, and (2) facilitated the establishment of a Joint Special Forces Command made up of a commando unit, an urban hostage rescue unit, and a Colombian Marine special forces unit. According to Defense officials, U.S. assistance to the mobile brigades consisted primarily of intelligence and logistics support, training, weapons, ammunition, vehicles, and infrastructure support, including a fortified base in La Macarena, which is the home base for Joint Task Force-Omega’s mobile units. This assistance has helped the Colombian Army conduct mobile operations throughout Colombia, a capacity that Defense officials said generally did not exist at the outset of Plan Colombia. According to a senior U.S. Military Group official, the mobile brigades’ effectiveness can be seen in the number of combatants from illegal armed groups captured and killed or who have surrendered. For example, Joint Task Force-Omega documentation provided by the Colombians shows that, as of February 2008, the task force had captured over 1,000 combatants, killed almost 100, and persuaded about 400 to surrender. The United States continues to provide support for the Army’s mobile brigades, but U.S. officials expect this support to be reduced as the brigades become increasingly self-sufficient. U.S. support has helped the Colombian military establish a Joint Special Forces Command that also operates under the direction of the General Command of the Armed Forces. 
The support consisted of training, weapons, ammunition, and infrastructure support, including for the command’s principal compound near Bogotá. According to Defense officials, the command includes approximately 2,000 soldiers from five units made up of Colombian Army, Air Force, and Marine components. It is tasked with pursuing high-value targets and rescuing hostages in urban and rural environments. U.S. officials described this command as similar to the U.S. Special Operations Command and said that, prior to 2004, the Colombian military did not have the capability to conduct joint special forces operations. According to U.S. officials, the command has been involved in a number of high-profile operations, including the recent rescue of 15 hostages, three of whom were U.S. citizens. In fiscal years 2000-2008, Congress provided over $115 million to help Colombia implement phase one of its infrastructure security strategy, designed to protect the first 110 miles of the nearly 500-mile-long Caño Limón-Coveñas oil pipeline from terrorist attacks. In prior years, insurgent attacks on the pipeline resulted in major economic losses for both the Colombian government and oil companies operating in the country. For instance, in 2001, the pipeline was attacked 170 times and forced to shut down for over 200 days, resulting in approximately $500 million in lost revenues, as well as considerable environmental damage. According to State, there was only one attack on the entire length of the pipeline in 2007. U.S. support provided for both an aviation component and a ground combat support element and included two UH-60 Blackhawk helicopters, eight UH-II helicopters, and related logistics support and ground facilities. Nearly $30 million was used for U.S. Special Forces training and equipment provided to about 1,600 Colombian Army soldiers assigned to protect this portion of the pipeline. In December 2007, the United States transferred operating and funding responsibility for the infrastructure security strategy, including nine helicopters, to Colombia. Beginning in fiscal year 2003, State has provided over $62 million in assistance to enable the Colombian Air Force to implement the Air Bridge Denial (ABD) program, which is designed to improve the Colombian government’s capacity to stop drug trafficking in Colombian airspace by identifying, tracking, and forcing suspicious aircraft to land so that law enforcement authorities can take control of the aircraft, arrest suspects, and seize drugs. The program was expanded in 2007 to include surveillance of Colombia’s coastal waters to strengthen the Colombian government’s capacity to address the emerging threat posed by semisubmersible vessels. To support the program, State and Defense have provided the Colombian Air Force with seven surveillance aircraft, which monitor Colombian airspace for suspicious traffic; infrastructure support at four ABD bases located across Colombia; contract aviation maintenance support; training; ground and air safety monitors; and funding for spare parts and fuel. The program also utilizes a network of U.S. detection resources including five in-country radars, over-the-horizon radars located outside Colombia, and airborne radar systems. In June 2007, the United States began nationalizing the ABD program, including transferring the title of surveillance aircraft and responsibility for operating and maintaining the five radars located in Colombia. 
According to NAS officials, the United States is training Colombian Air Force ground and air safety monitors and maintenance personnel and expects to nationalize the program by 2010, with only limited U.S. funding in subsequent years. According to NAS officials, suspicious aircraft tracks dropped from 637 in 2003 to 84 in 2007. In 2007, the Colombian Air Force forced three suspected drug-trafficking aircraft to land and each aircraft was seized; however, according to a senior NAS official, the crews escaped, and no cocaine was found. In the same year, the ABD program was expanded to include a maritime patrol mission. While conducting a maritime patrol, ABD aircraft assisted in the sinking of two self-propelled semisubmersibles, which resulted in the arrest of seven individuals and the seizure or destruction of approximately 11 metric tons of cocaine. In our September 2005 report, we noted that the stated purpose of the program (the seizure of aircraft, personnel, and drugs) was rarely achieved, though the program did succeed in reducing the number of suspicious flights over Colombia—a valuable program outcome, according to U.S. and Colombian officials. Since fiscal year 2000, State and Defense have provided over $89 million to help sustain and expand Colombian Navy and Marine interdiction efforts. According to Defense, from January to June 2007, an estimated 70 percent of Colombia’s cocaine was smuggled out of the country using go-fast vessels, fishing boats, and other forms of maritime transport. State and Defense support for the Colombian Navy is designed to help improve its capacity to stop drug traffickers from using Colombia’s Caribbean and Pacific coasts to conduct drug-trafficking activities. State and Defense support for the Colombian Marines is designed to help them gain control of Colombia’s network of navigable rivers, which traffickers use to transport precursor chemicals and finished products. According to Colombian Ministry of Defense officials, the number of metric tons of cocaine seized by the Navy and Marines represented over half of all cocaine seized by Colombia in 2007. State and Defense assistance to the Colombian Navy provided for infrastructure development (such as new storage and refueling equipment for the Navy station in Tumaco), the transfer of two vessels to Colombia, eight “Midnight Express” interceptor boats, two Cessna Grand Caravan transport aircraft, weapons, fuel, communications equipment, and training. State assistance also helped the Colombian Navy establish a special intelligence unit in the northern city of Cartagena to collect and distribute time-sensitive intelligence on suspect vessels in the Caribbean. In 2007, the unit coordinated 35 interdiction operations, which resulted in the arrests of 40 traffickers, the seizure of over 9 metric tons of cocaine, and the seizure of 21 trafficker vessels, including one semisubmersible vessel. The U.S. Embassy Bogotá credits this unit with over 95 percent of all Colombian Navy seizures in the Caribbean, forcing traffickers to rely more on departure sites along the Pacific Coast and areas near Venezuela and Panama. The Colombian Navy faces certain challenges. First, it generally lacks the resources needed to provide comprehensive coverage over Colombia’s Pacific coastline. For example, according to Colombian Navy officials, the Navy has only three stations to cover all of Colombia’s Pacific coastline. Second, according to U.S. 
Embassy Bogotá officials, these services lack adequate intelligence information to guide interdiction efforts along the Pacific coast. According to embassy officials, the United States is working with the Colombians to expand intelligence gathering and dissemination efforts to the Pacific coast, in part by providing support to expand the Navy’s intelligence unit in Cartagena to cover this area. Third, traffickers have increasingly diversified their routes and methods, including using semisubmersibles to avoid detection. For the Colombian Marines, State and Defense provided support for infrastructure development (such as docks and hangars), 95 patrol boats, weapons, ammunition, fuel, communications equipment, night vision goggles, and engines. Colombia’s rivers serve as a vital transport network and are used to transport the precursor chemicals used to make cocaine and heroin, as well as to deliver the final product to ports on Colombia’s Caribbean and Pacific coasts. According to State, up to 40 percent of the cocaine transported in Colombia moves through the complex river network in Colombia’s south-central region to the southwestern coastal shore. According to U.S. Southern Command officials, the key challenge facing the riverine program is a general lack of resources given the scope of the problem. The Colombian Marines maintain a permanent presence on only about one-third of Colombia’s nearly 8,000 miles of navigable rivers. U.S. embassy planning documents have set a goal of helping the Colombian Marines achieve a coverage rate of at least 60 percent by 2010. Since the early 1990s, State/INL has supported the Colombian National Police Aerial Eradication Program, which is designed to spray coca and opium poppy. Since fiscal year 2000, State has provided over $458 million to support the program, which increased its spray operations about threefold. The Aerial Eradication Program consists of U.S.-owned spray aircraft and helicopters, as well as contractor support to help fly, maintain, and operate these assets at forward operating locations throughout Colombia. As of August 2008, these aircraft included 13 armored AT-802 spray aircraft; 13 UH-1N helicopters used as gunships or search and rescue aircraft; four C-27 transport aircraft used to ferry supplies and personnel to and from the various spray bases; and two reconnaissance aircraft used to find and identify coca cultivation, and plan and verify the results of spray missions. A typical spray mission consists of four spray aircraft supported by helicopter gunships to protect the spray aircraft, along with a search and rescue helicopter to rescue downed pilots and crew. In addition, ground security is provided as needed by the Army Counternarcotics Brigade. U.S.-funded counternarcotics efforts, which focused on aerial spraying, did not achieve Plan Colombia’s overarching goal of reducing the cultivation, production, and distribution of cocaine by 50 percent, in part because coca farmers responded with a series of effective countermeasures. 
These countermeasures included (1) pruning coca plants after spraying; (2) replanting with younger coca plants or plant grafts; (3) decreasing the size of coca plots; (4) interspersing coca with legitimate crops to avoid detection; (5) moving coca cultivation to areas of the country off-limits to spray aircraft, such as the national parks and a 10-kilometer area along Colombia’s border with Ecuador; and (6) moving coca crops to more remote parts of the country—a development that has created a “dispersal effect” (see figures 12 and 13). While these measures allowed coca farmers to continue cultivation, they have increased the coca farmers’ and traffickers’ cost of doing business. NAS officials said Colombia and the United States have taken several actions to address this issue. For instance, the government of Colombia initiated a program in 2004 to manually eradicate coca. Since 2004, the amount of coca manually eradicated increased from about 11,000 hectares to about 66,000 hectares in 2007. According to NAS officials, in response to congressional budget cuts in fiscal year 2008, the embassy reduced its aerial eradication goal to 130,000 hectares, compared with 160,000 hectares in 2007. This reduction may be offset by a planned increase in manual eradication efforts from 66,000 hectares in 2007 to 100,000 hectares in 2008. However, manual eradication efforts require significant personnel, security, and transportation, including air mobility resources. Through the end of May 2008, Colombia reported that about 28,000 hectares had been manually eradicated. In addition, manual eradication often takes place in conflictive areas against a backdrop of violence, which makes full implementation of this strategy even more problematic. According to State, despite protection measures taken, manual eradicators were attacked numerous times—by sniper fire, minefields, and improvised explosive devices—and through August 2008, 23 eradicators were killed, bringing to 118 the total number of eradicators killed since 2005.
National Police Air Service
Since fiscal year 2000, State provided over $463 million to help expand and sustain the Police Air Service (known by its Spanish acronym, ARAVI). Similar to the role played by the Army Aviation Brigade, ARAVI provides air mobility support for a range of National Police functions, including aerial and manual eradication efforts that require gunship and search and rescue support for the spray planes, as well as airlift support for the manual eradication teams and associated security personnel. In addition, ARAVI provides airlift for the National Police’s commando unit, known as the Junglas. According to NAS officials, ARAVI consists of 61 NAS-supported aircraft and 30 National Police-supported aircraft. Key program support elements include hangar and taxiway construction upgrades to the Air Service’s operating base outside of Bogotá; the provision of contract mechanics; training; and funding for spare parts, fuel, and other expenses. Appendix II describes these support services in more detail. According to NAS officials, in addition to enabling ARAVI to better manage its aviation assets, this support has helped ARAVI become self-sufficient in some areas. For instance, it provides its own entry-level pilot and mechanic training and can plan and execute its own operations. However, U.S. and contractor officials said that ARAVI still suffers from major limitations. 
According to NAS and contractor officials, ARAVI:

Receives approximately 70 percent of its total maintenance and operating funding from State. According to Embassy Bogotá officials, the Colombian Ministry of Defense often underfunds the service on the assumption that State will make up the difference.

Lacks some specialized maintenance personnel. For instance, according to State-funded U.S. contractor personnel, in February 2008, the service had only about half of the required number of quality control inspectors. To make up the shortfall, the service relies on quality control inspectors provided by the contractor.

Has high absentee rates. This is a problem that we have reported on in the past. For example, according to data supplied by the contractor, during the second week of February 2008, only 25 percent of the technicians and 40 percent of the assigned inspectors were present to service ARAVI's UH-60s.

Since fiscal year 2000, State provided over $153 million to strengthen the National Police's efforts to interdict illicit drug trafficking. According to State, in fiscal year 2007, it focused most of its assistance on equipping and training the Junglas, but also provided assistance for maritime, airport, and road interdiction programs. The Junglas consist of 500 specially selected police divided into three companies based in Bogotá, Santa Marta, and Tulua, as well as a 60-man instructor group based at the National Police rural training center. Described by U.S. Embassy Bogotá officials as widely considered one of the best trained and equipped commando units in Latin America, the Junglas are often the unit of choice in operations to destroy drug production laboratories and other narcoterrorist high-value targets, many of which are located in remote, hard-to-find locations. State support for the Junglas consisted of specialized equipment typically provided to U.S. Army Special Forces teams, such as M-4 carbines, mortars, helmets, and vests, as well as specialized training provided in Colombia and the United States. According to State, in 2006 and 2007, the Junglas were responsible for more than half of all the cocaine hydrochloride and coca base laboratories destroyed by the National Police, and seized over 64 metric tons of cocaine during the same period. State also supported the National Police's maritime and airport security programs to strengthen the National Police's capability to protect against illicit cargo—primarily narcotics—smuggled through Colombia's principal seaports and airports. State assistance included funding for training, technical assistance, and limited logistical support (including K-9 support) for port security units at eight Colombian seaports and six airports. According to State, units based at Colombia's principal seaports and airports seized more than 13 metric tons of illicit drugs in 2006, a figure that rose to over 22 metric tons in 2007.

Since fiscal year 2000, the United States provided over $92 million to help the Colombians establish Carabineros squadrons. The Carabineros were initially created to provide an immediate state presence in conflictive areas reclaimed by the Colombian military. According to State, the Colombians currently have 68 Carabineros squadrons, each staffed with 120 personnel. The squadrons provide temporary support as other government services and a permanent police presence are established in reclaimed areas.
State support consisted of training, weapons, ammunition, night vision goggles, metal detectors, radios, vehicles, and other items, including some limited support for permanent police stations. The Carabineros supported President Uribe's goal of reestablishing a state presence in each of the country's 1,099 municipalities (169 municipalities had no police presence prior to 2002). Though a July 2007 U.S. Embassy Bogotá report noted there are now police stations in every municipality throughout Colombia, these often consist of a small number of police who are responsible for areas covering hundreds of square miles of territory. Despite these limitations, State noted that in contrast to earlier years, no police stations were overrun in 2007. NAS officials attributed this development to improved base defense training, defensive upgrades, and the increased police presence that Carabinero squadrons provide in rural areas.

Since fiscal year 2000, the United States has provided nearly $1.3 billion for nonmilitary assistance to Colombia, focusing on the promotion of (1) economic and social progress and (2) the rule of law, including judicial reform. To support social and economic progress, the largest share of U.S. nonmilitary assistance has gone toward alternative development, which has been a key element of U.S. counternarcotics assistance and has bettered the lives of hundreds of thousands of Colombians. Other social programs have assisted thousands of internally displaced persons (IDPs) and more than 30,000 former combatants. Assistance for the rule of law and judicial reform has expanded access to the democratic process for Colombian citizens, including the consolidation of state authority and the establishment of government institutions and public services in many areas reclaimed from illegal armed groups. (See table 3.) Nevertheless, these programs face several limitations and challenges. For example, the geographic areas where alternative development programs operate are limited by security concerns, and programs have not demonstrated a clear link to reductions in illicit drug cultivation and production. In addition, many displaced persons may not have access to IDP assistance, the reintegration of former combatants into society and reparations to their victims have been slow, and funding to continue these programs is a concern. Finally, Colombia's justice system has limited capacity to address the magnitude of criminal activity in Colombia.

USAID provided more than $500 million in assistance between fiscal years 2000 and 2008 to implement alternative development projects, which are a key component of the U.S. counternarcotics strategy in Colombia. USAID's goal for alternative development focuses on reducing the production of illicit narcotics by creating sustainable projects that can function without additional U.S. support after the start-up phase is complete. In recent years, USAID modified its alternative development strategy to emphasize sustainability. With regard to its strategic goal, alternative development projects face two key challenges—USAID currently has almost no alternative development projects in areas where the majority of coca is grown, and a government of Colombia policy prohibits alternative development assistance projects in communities where any illicit crops are being cultivated.
USAID’s original alternative development strategy in 2000 focused on encouraging farmers to manually eradicate illicit crops and providing assistance to those who did through licit, short-term, income-producing opportunities. These efforts were concentrated in the departments of Caquetá and Putumayo, where, at the time, most of Colombia’s coca was cultivated and where U.S. eradication efforts were focused. However, USAID and its implementing partners found it difficult to implement projects in the largely undeveloped south where the Colombian government exercised minimal control. As a result, in February 2002, USAID revised its approach to support long-term, income-generating activities, focus more attention and resources outside southern Colombia, and encourage private-sector participation. In 2004, we reported that the revised alternative development program had made progress but was limited in scope and may not be sustainable. USAID revised its alternative development strategy beginning in 2006 to focus on specific geographic corridors, improve coordination, and increase the likelihood of achieving sustainable projects. The geographic corridors are in six regions in the western part of Colombia where the government has greater control and markets and transportation routes are more developed. However, the corridors are not in primary coca cultivation areas. USAID officials told us that the alternative development corridors are intended to act as a magnet, providing legal economic opportunities to attract individuals from regions that cultivate illicit crops, while also preventing people within the corridors from cultivating coca. USAID’s current strategy is carried out through two major projects—Areas for Municipal Level Alternative Development (ADAM) and More Investment for Sustainable Alternative Development (MIDAS). ADAM works with individuals, communities, and the private sector to develop licit crops with long-term income potential, such as cacao and specialty coffee. ADAM also supports social infrastructure activities such as schools and water treatment plants, providing training, technical assistance, and financing of community projects. It emphasizes engagement with communities and individual beneficiaries to get their support and focuses on smaller scale agricultural development with long-term earning potential. For example, under ADAM, USAID provided infrastructure improvements to a facility that processes blackberries in order to increase capacity and continues to provide technical assistance to farmers who grow blackberries for the facility. MIDAS promotes private-sector led business initiatives and works with the Colombian government to make economic and policy reforms intended to maximize employment and income growth. USAID encourages public and private-sector investment in activities that raise rural incomes and create jobs, and it provides training and technical assistance to the Colombian government at the local and national levels to expand financial services into rural areas, build capacity of municipal governments, and encourage the Colombian government’s investment in programs. For example, MIDAS worked with the Colombian government to lower microfinance fees and provided technical assistance to private lenders, which led to increased availability of small loans in rural areas that can be used to start up small- and medium-sized businesses. 
Overall, alternative development beneficiaries we talked with told us their quality of life had improved because they faced less intimidation by the FARC and had better access to schools and social services, even though they generally earned less money compared with cultivating and trafficking in illicit drugs.

One challenge facing alternative development programs is their limited geographic scope. Alternative development programs are largely focused in economic corridors in the western part of Colombia, where, according to USAID officials, a greater potential exists for success due to access to markets, existing infrastructure, and state presence and security. Currently, USAID has almost no alternative development projects in eastern Colombia, where the majority of coca is grown. (See fig. 14.) While the majority of the Colombian population lives within the USAID economic corridors, the lack of programs in eastern Colombia nonetheless poses a challenge for linking alternative development to reducing the production of illicit drugs. The USAID Mission Director told us that the mission intends to expand the geographic scope of alternative development programs as the government of Colombia gains control over conflictive areas. However, the lack of transportation infrastructure in most coca-growing areas limits the chances of program success and future expansion. USAID and other U.S. Embassy Bogotá officials emphasized that alternative development programs have benefited from security gains made possible through the Colombian military's enhanced air mobility, but large areas of Colombia are still not secure.

According to USAID officials, another challenge is the government of Colombia's "Zero Illicit" policy, which prohibits alternative development assistance projects in communities where any illicit crops are being cultivated. Acción Social officials said the policy is intended to foster a culture of lawfulness and encourage communities to exert peer pressure on families growing illicit crops so that the community at large may become eligible for assistance. However, USAID officials expressed concern that the policy limits their ability to operate in areas where coca is grown. The policy also complicates USAID's current strategy of working in conflictive areas like Meta, where coca is cultivated in high concentrations. One nongovernmental organization official told us the policy is a major problem because if one farmer grows coca in a community otherwise fully engaged in and committed to growing licit crops, then all aid is supposed to be suspended to that community. However, USAID officials told us programs have been suspended only a few times due to this requirement.

USAID collects data on 15 indicators that measure progress on alternative development; however, none of these indicators measures progress toward USAID's goal of reducing illicit narcotics production through the creation of sustainable economic projects. Rather, USAID collects data on program indicators such as the number of families benefited and hectares of legal crops planted. While this information helps USAID track the progress of projects, it does not help with assessing USAID's progress in reducing illicit crop production or its ability to create sustainable projects. In 2004, USAID officials said a new strategy was being developed that would allow for the creation of new performance measures. However, USAID did not develop indicators useful for determining whether alternative development reduces drug production.
For example, while USAID intends for coca farmers in eastern Colombia to move to areas with alternative development projects, USAID does not track the number of beneficiaries who moved out of areas prone to coca cultivation. In addition, while the current alternative development strategy is designed to produce sustainable results, USAID does not collect tracking data on beneficiaries who have received assistance to determine whether they remain in licit productive activities or which projects have resulted in sustainable businesses without government subsidies. The contractor responsible for implementing USAID's alternative development programs told us USAID does not monitor the necessary indicators and, therefore, cannot determine the extent to which projects are contributing to reducing coca cultivation or increasing stability.

Since fiscal year 2000, State's Bureau of Population, Refugees, and Migration (PRM) reports it has provided $88 million in short-term, humanitarian assistance to support IDPs and other vulnerable groups (such as Afro-Colombians and indigenous peoples). PRM provides humanitarian assistance for up to 3 months after a person is displaced, providing emergency supplies as well as technical assistance and guidance to the government of Colombia and local humanitarian groups to build their capacity to serve IDPs. In addition, from fiscal years 2000 to 2007, USAID has provided over $200 million for longer-term economic and social assistance to support IDPs and vulnerable groups. USAID assistance has focused on housing needs and generating employment through job training and business development and has also included institutional strengthening of Colombian government entities and nongovernmental organizations through technical assistance and training in areas such as delivery of housing improvements and subsidies and the provision of health care. According to USAID, more than 3 million people have benefited from this assistance. However, according to State and USAID officials, the number of newly displaced persons in Colombia continues to rise, and it can be difficult to register as an IDP. According to the United Nations High Commissioner for Refugees, Colombia has up to 3 million IDPs—the most of any country in the world. Acción Social reports it has registered over 2.5 million IDPs. But State PRM officials report that international and nongovernmental organizations estimate that between 25 and 40 percent of IDPs are not registered. Acción Social officials disagreed and estimated under-registration to be 10 percent. In any case, Acción Social officials said that the agency's budget is not sufficient to provide assistance to all the IDPs registered.

In 2003, the Colombian government and the AUC entered into a peace accord under which the AUC agreed to demobilize. State data indicate the United States has provided over $44 million for USAID programs for monitoring and processing demobilized AUC combatants, the verification mission of the Organization of American States, reparations and reconciliation for victims of paramilitary violence, and the reintegration of adult and child ex-combatants into Colombian society. USAID also supports the National Commission on Reparation and Reconciliation, which was created to deliver reparations and assistance to victims. From 2003 to 2006, according to USAID, approximately 32,000 AUC members demobilized. Most were offered pardons for the crime of raising arms against the Colombian state and were enrolled in a government of Colombia reintegration program.
AUC leaders and soldiers who had been charged, arrested, or convicted of any major crime against humanity (such as murder and kidnapping) were offered alternative sentencing in exchange for providing details of crimes in depositions to Colombian officials. USAID assisted the government of Colombia in the creation of 37 service centers, mostly in large cities, at which ex-combatants could register for health services, job training, and education and career opportunities, and has assisted the service centers in tracking the demobilized soldiers' participation in the reintegration process. USAID also assisted with AUC identity verification, criminal record checks, initial legal processing, documentation of biometric data (such as pictures, thumbprints, and DNA samples), and issuance of a registration card. U.S. and Colombian officials report that the AUC demobilization has enhanced security through reductions in murders, displacements, and human rights abuses. Depositions have uncovered thousands of crimes, hundreds of former combatants are serving jail sentences for their crimes, and victims of paramilitary violence are beginning to see resolution to crimes committed against them and their families. In April 2008, the government of Colombia began allowing some FARC deserters to receive benefits similar to those received by the AUC. FARC ex-combatants who cooperate with Colombian authorities may receive pardons; enter a reintegration program; and have access to training, medical benefits, and counseling.

Despite the progress made, Colombian and USAID officials told us the reintegration of demobilized combatants has been slow, and many may have returned to a life of crime. The reintegration program is the primary system to prevent the demobilized from joining the ranks of criminal gangs. However, USAID officials estimate that approximately 6,000 of the demobilized have not accessed the service centers. Moreover, Colombian officials told us many businesses have been reluctant to hire the ex-combatants, and most have not found employment in the formal economy. Criminal gangs recruit heavily from the ranks of the demobilized, and Colombian officials estimate about 10 percent (or 3,000) have joined these illegal groups. In addition, a senior Colombian official reported that reparations to the victims of paramilitary violence have been slow. Ex-combatants have not been forthcoming about illegally obtained assets—which can be used to pay for reparations—and often hide them under the names of family members or acquaintances. Victims of paramilitary violence have criticized the reparations process as slow and expressed resentment of the benefits paid to demobilized paramilitaries under the reintegration program. Initially, victims could not receive reparations unless there was a conviction, which required a lengthy judicial process. However, in April 2008, Colombia began to provide compensation to over 120,000 paramilitary victims without the requirement for a conviction.

Since fiscal year 2000, State data indicate that USAID has provided over $150 million to support the rule of law in Colombia through human rights protection, the creation of conflict resolution centers, and training of public defenders, among other activities.
USAID has provided more than 4,500 human rights workers with protection assistance, such as communications equipment and bulletproof vests, as well as technical assistance, training, equipment, and funding to programs that protect union leaders, journalists, mayors, and leaders of civil society organizations. USAID also created and provides assistance to Colombia's Early Warning System, which alerts authorities to violent acts by illegal armed groups. According to USAID, since its inception in 2001, the Early Warning System has prevented over 200 situations that might have resulted in massacres or forced displacements. By the end of 2007, USAID had achieved its goal of creating 45 justice sector institutions known as Justice Houses and had trained over 2,000 conciliators who help to resolve cases at Justice Houses; these conciliators have handled over 7 million cases, relieving pressure on the Colombian court system. USAID has also refurbished or constructed 45 courtrooms to ensure they are adequate for oral hearings under the new criminal justice system, and is developing 16 "virtual" courtrooms, by which the defendant, judges, prosecutors, and public defenders can all participate via closed-circuit television. USAID has trained 1,600 public defenders since 2003, including training in a new criminal procedure code, and the Colombian government now pays all of the defenders' salaries.

However, these programs face challenges in receiving commitments from the Colombian government and addressing shortfalls in equal access to justice for all Colombians. USAID officials expressed concern about the Colombian government's ability to fund the Early Warning System—USAID currently pays 95 to 98 percent of the salaries. According to USAID officials, a letter of understanding between USAID and the Colombian government calls for Colombia to pay 100 percent in 2011. In addition, the 45 Justice Houses in Colombia are located in large cities primarily in the western half of the country, with almost no Justice Houses in the less populated eastern half of the country where high rates of violence and crime occur. However, USAID plans to assist the Colombian government in strengthening state presence in rural areas of Colombia through the construction of 10 new regional Justice Houses in rural, post-conflict areas.

Since the beginning of 2007, USAID and Defense have committed $28.5 million for two programs that support Colombia's "Clear, Hold and Consolidate" strategy: (1) the Regional Governance Consolidation Program and (2) the Initial Governance Response Program. Both programs directly support the Coordination Center for Integrated Government Action (CCAI), which was created in 2004 to integrate several military, police, and civil agencies and to coordinate national-level efforts to reestablish governance in areas that previously had little or no government presence. USAID works to increase the operational capacity of CCAI by providing direct planning and strategic assistance; for example, USAID hired a consulting firm to develop a detailed operational plan for CCAI's activities in Meta. USAID also assists CCAI with projects designed to reinforce stability in areas formerly controlled by insurgents and quickly build trust between the government and local communities in Meta—such as La Macarena. USAID officials said Colombia's consolidation strategy may serve as a model for future program activities throughout Colombia; however, CCAI faces challenges that could limit its success.
CCAI does not have its own budget and relies on support, funding, and personnel from other agencies within the Colombian government. While Defense officials estimate that CCAI spent over $100 million in funds from Colombian government agencies in 2007, it often faced delays in receiving the funding. Also, security remains a primary concern for CCAI because it operates in areas where illegal armed groups are present. For example, CCAI representatives in La Macarena do not travel outside a 5-kilometer radius of the city center due to security concerns.

Justice has provided over $114 million in fiscal years 2000 through 2007 for programs intended to improve the rule of law in Colombia, primarily for the transition to a new criminal justice system and training and related assistance for investigating human rights crimes and crimes confessed to by former combatants during the AUC demobilization. About $42 million was for training, technical assistance, and equipment to support the implementation of a new accusatory criminal justice system. In 2004, Colombia enacted a new Criminal Procedure Code, which began the implementation of an oral accusatory system involving the presentation and confrontation of evidence at oral public trials, similar to the system used in the United States. Justice training has included simulated crime scenes and court proceedings to develop the necessary legal and practical understanding of the oral accusatory system. Justice reports it has trained over 40,000 judges, prosecutors, police investigators, and forensic experts in preparation for their new roles. According to Justice, the new accusatory system has improved the resolution of criminal cases in Colombia. Under the old system, trials took an average of 5 years; this has been reduced to 1 year under the current system. According to Justice, the new system has led to an increase in the conviction rate of 60 to 80 percent, with Colombia reporting 48,000 convictions in the first 2 years of implementation. Furthermore, the number of complainants and witnesses has increased since implementation, suggesting greater public confidence in the new system.

Justice also provided about $10 million for fiscal years 2005 to 2007 to both the Fiscalia's Justice and Peace Unit and its Human Rights Unit to support the AUC demobilization under the Justice and Peace Process. The Justice and Peace Unit oversees the process through which demobilized paramilitaries give depositions that detail their knowledge of the paramilitary structure and of crimes such as mass killings or human rights abuses. Justice has provided more than $2 million in equipment, including video recording technology, to aid in the processing of approximately 5,000 depositions at the Justice and Peace offices in Bogotá, Medellin, and Barranquilla. The unit also collects and processes complaints filed by victims of paramilitary violence. The Human Rights Unit is tasked with the investigation and prosecution of human rights violations, such as attacks on union leaders, forced disappearances, and mass graves, as well as the investigation and prosecution of demobilized paramilitary members suspected of human rights violations. According to Colombian officials, depositions have led to confessions to over 1,400 crimes of which the government had no prior knowledge, as well as the locations of an estimated 10,000 murder victims in 3,500 grave sites.
Over 1,200 victims’ remains have been recovered through exhumations, and the human identification labs continue to work on the identification of the remains using DNA testing. According to Justice, the depositions of 25 paramilitary leaders have been initiated and, in May 2008, 15 leaders were extradited to the United States. The Justice and Peace Unit has received over 130,000 victims’ claims. Justice also provided about $10 million from fiscal years 2005 to 2007 to increase the capacity for the Colombian government to investigate criminal cases. Justice provided vehicles and funds for investigators to travel to crime scenes and collect evidence; specialized forensic training and equipment for Colombian exhumation teams that unearth victims’ remains based on information uncovered in depositions; and training, technical assistance, and DNA processing kits to Colombian human identification labs to streamline and improve DNA identification efficiency. Justice is also funding a project to collect DNA samples from 10,000 demobilized AUC members and enter the data into a DNA identification database, which could later be compared with DNA found at crime scenes. Additionally, funds were allocated to contract 30 attorneys to assist with the analysis and processing of thousands of complaints from paramilitary victims. Finally, Justice provided specialized criminal training in the areas of money laundering and anticorruption. Despite U.S. assistance toward improving Colombian investigative and prosecutorial capabilities, Colombian officials expressed concern that they lack the capacity to pursue criminal cases due to a lack of personnel, air mobility, and security, particularly given that most of the paramilitary killings and other AUC crimes occurred in rural areas too dangerous or too difficult to reach by road. In particular: Fiscalia and Justice officials said neither the Justice and Peace Unit nor the Human Rights Unit have enough investigators and prosecutors to fully execute their missions. For example, 45 prosecutors from the Human Rights Unit have to cover more than 4,000 assigned cases. From 2002 to 2007, the unit produced less than 400 convictions. Further, thousands of depositions and victim complaints, which Colombian officials say are likely to reveal additional crimes, have yet to be processed by the Fiscalia. As of October 2007, over 3,000 known grave sites had not been exhumed and less than half of the recovered human remains had been identified. Justice has provided assistance to expand the unit, including regional units in 7 cities outside of Bogotá. Moreover, Justice reported in September 2008 that the Human Rights Unit has received an additional 72 prosecutors and 110 investigators, but noted that more investigators are needed. According to Colombian and U.S. officials, criminal and human rights investigations and exhumation of graves often require hours and sometimes days to complete. The investigators often have to go to conflictive areas that are impossible to access without sufficient transportation resources. For example, in remote areas investigators often need army or police helicopters. The Colombian National Policehave programmed over 15,600 flying hours for their helicopters for 2008; however, police officials stated that none of these hours were allocated for Fiscalia investigations. U.S. officials confirmed Fiscalia’s need for additional transportation resources, including funding for commercial transportation as well as assets provided by Colombian security forces. 
From the outset of Plan Colombia, Congress made clear that it expected that all U.S. support programs would eventually transition to Colombia. With the completion of Plan Colombia and the start-up of its second phase, Congress reiterated this guidance and called on State and other affected agencies to increase the pace of nationalization, with a focus on the major aviation programs under Plan Colombia that are largely funded by State. In response to this guidance and budget cuts to fiscal year 2008 military assistance to Colombia instituted by Congress, State and Defense have accelerated efforts to nationalize or partly nationalize the five major Colombian military and National Police aviation programs supported by the United States. Apart from these efforts, State has taken action to nationalize portions of its nonaviation program support, and State and Defense are seeking to transfer a portion of the assistance Defense manages in other program areas to the Colombians by 2010. Justice and USAID view their efforts as extending over a longer period than U.S. support to the Colombian military and have not yet developed specific nationalization plans; however, each agency is seeking to provide its Colombian counterparts with the technical capabilities needed to manage program operations on their own. U.S. nationalization efforts collectively face the challenges of uncertain funding levels and questions pertaining to Colombia's near-term ability to assume additional funding responsibilities.

State has initiated the transfer of program funding and operations for the Army Aviation Brigade to the Colombians—by far the largest aviation program funded by State. Nationalization efforts have centered on a contractor reduction plan created by State in 2004 to eliminate the Colombians' reliance on U.S. contract pilots and mechanics (see fig. 18). This process, however, will not be completed until at least 2012, when State expects the Colombians will have enough trained pilots and mechanics to operate the brigade on their own. Contract pilot and mechanic totals provided by State indicate that the plan is on track. U.S. officials added that the transfer of U.S.-titled aircraft and the termination of U.S. support for other costs, such as parts and supplies, will occur by 2012 as part of this plan.

In contrast to the Army Aviation Brigade, State has not developed contractor reduction plans for the National Police's Air Service or Aerial Eradication Program—the second and third largest aviation programs supported by State, which work together to address U.S. and Colombian counternarcotics objectives. U.S. Embassy and State program officials explained that State's assistance to the police is expected to continue for the indefinite future, subject to congressional funding decisions, to sustain a partnership with the police that predates Plan Colombia. However, State has taken certain steps, such as training Colombian mechanics to replace contract personnel, to reduce the Colombians' dependence on U.S. assistance. As of June 2008, only 3 of the Colombian National Police Air Service's 233 pilots were contract personnel, while 61 out of 422 mechanics were contractors. For the Colombian National Police's Aerial Eradication Program, as of June 2008, 61 out of 76 pilots were contract personnel, while 166 out of 172 mechanics were contract staff.
NAS plans to institute a series of efforts, including the training of spray plane mechanics, to increase the ability of the Colombians to assume a greater share of total program costs.

U.S. nationalization efforts were accelerated in the wake of the fiscal year 2008 budget cuts instituted by Congress but remain focused on State-funded aviation programs. Based on discussions with the Colombians beginning in 2007, the United States identified six key elements of NAS aviation programs as a starting point for accelerated nationalization efforts, which supplement the steps described above. As shown in table 4, these six areas cut across U.S.-supported aviation programs in Colombia. U.S. Embassy Bogotá officials estimated that these actions could result in nearly $70 million in annual program savings. NAS is currently seeking to identify additional budget savings by reducing its aerial spray program and through a wide assortment of "efficiencies" it expects to implement. State officials noted that these reductions and efficiencies will lead to diminished eradication and interdiction results.

State has made significant progress in nationalizing nonaviation programs, including support for interdiction efforts (seaport and airport security, base security and roads, and Junglas operations); programs designed to extend the Colombian government's presence throughout the country (mainly, reestablishing a National Police presence in all municipalities); and an individual deserter program, which supplements the formal demobilization and reintegration programs managed by USAID. NAS largely describes all of these programs, with regard to U.S.-funded employee or contractor involvement, as fully or nearly nationalized, with only limited U.S. oversight or technical assistance provided.

Defense nationalization efforts are managed by the U.S. Military Group in Bogotá. A senior Military Group official noted that Defense's nationalization efforts are based on limited drawdowns in Defense-managed funds, which include both State FMF funds and Defense's counternarcotics budget. The U.S. government is seeking to establish a strategic partnership with Colombia by 2010, whereby the Colombian Ministry of Defense will accelerate its efforts to assume increased funding and management responsibilities for programs currently supported with U.S. military assistance. The same official noted that the Military Group has closely coordinated this nationalization strategy with the Colombian military at all levels since November 2007. According to Defense officials, the 2008 cuts in FMF and Defense funding led to a reexamination of plans to transition some program funding and implementation responsibilities to the Colombians. In line with this reexamination, the U.S. Military Group in Bogotá and State's Bureau for Political and Military Affairs are developing a report to Congress that will detail their strategy to reduce FMF and Defense counternarcotics support over the next several years, with an initial focus on 2010, when it is hoped the Colombians will reach a point of "irreversibility" with regard to security advances against the FARC and other illegal armed groups.

USAID and Justice are focusing on the sustainability of projects and providing the Colombians with the technical capabilities to manage their own programs; however, neither agency has developed comprehensive transition plans.
USAID and Justice efforts to transfer program and funding responsibilities differ significantly from those of State and Defense because, with limited exceptions, these agencies do not have physical assets to turn over to the Colombians. Rather, their efforts center on training and capacity building to allow the Colombians to ultimately manage their own programs. USAID efforts focus on developing sustainable nonmilitary assistance programs, increasing the capacity of the government of Colombia to design and manage similar projects, and transferring any support activities, as warranted. USAID is seeking to create sustainable projects, in part, by increasing financial participation by the Colombian government, the private sector, and project beneficiaries. For example, USAID alternative development projects are funded 70 percent to 90 percent by outside groups, on average, and have leveraged over $500 million in public and private funds. USAID is also seeking to increase the Colombians' ability to design and manage their own assistance programs by involving relevant government of Colombia staff in project design and implementation activities. For example, USAID provides technical assistance to the government of Colombia on financial policy reforms that seek to expand financial services to underserved groups and isolated regions. USAID also provides training to Colombian banks, credit unions, and nongovernmental organizations to establish and expand financial services for these groups. USAID has made efforts to transfer specific program operations and funding responsibilities for several projects. For example, USAID is transferring the Human Rights Early Warning System, which was originally funded entirely by USAID. Under an agreement, the government of Colombia currently funds 30 percent of this program and is supposed to assume full operational and financial responsibility for the program in 2011. In addition, USAID will now contribute no more than 50 percent toward the construction of Justice Houses, which were initially constructed entirely with USAID funds.

Justice efforts focus on building the capacity of the Colombian government in several areas, such as increasing the ability of the government to investigate and prosecute crimes, as well as to protect witnesses and legal personnel. Justice officials describe the process as one of creating an enduring partnership with the Colombian government through the provision of training and technical assistance. Justice conducts many "train the trainers" programs designed to enhance the ability of the Colombian government to continuously build institutional knowledge in certain program areas.

Both U.S. and Colombian officials said the congressionally mandated cuts to military assistance in 2008 and uncertainty over future years' funding complicate the process of planning and implementing nationalization efforts. In addition, while Colombia's economic outlook has improved in recent years, its ability to appropriate funds quickly or reallocate funds already approved is limited. State noted in its April 2008 multiyear strategy report to Congress that the fiscal year 2008 budget significantly changed the mix of U.S. assistance to Colombia by reducing eradication, interdiction, and FMF programs and increasing support for economic development, rule of law, human rights, and humanitarian assistance.
The report notes agreement with Congress on the importance of increasing support for nonmilitary programs, but State expressed concern regarding Colombia's ability to use this assistance without the security that air mobility assets provide. The report also notes State's concern about the need to "ensure a smooth and coordinated transition of financial and operational responsibilities to the government of Colombia for interdiction, eradication, and counterterrorism programs." The Colombian Vice Minister of Defense stressed that the budget cuts mandated by Congress could not be fully absorbed within Colombia's current budget cycle and added that the Ministry of Defense is severely restricted in its ability to reprogram funds or request emergency spending from the central government. He also said that unplanned cuts of this magnitude put major programs at risk, in particular programs aimed at providing the Colombians with air mobility capabilities needed to support drug reduction, enhanced state presence, and a range of social and economic programs.

Both U.S. and Colombian officials are working on a detailed nationalization agreement that would outline next steps, transition plans, key players and responsibilities, and potential funding sources. In line with this objective, the Colombians have formed an Office of Special Projects to lead all nationalization efforts involving the Ministry of Defense. The office's Director told us that, while prior attempts at nationalization planning were not implemented, the government of Colombia has begun a serious effort to plan for nationalization. According to the Director, this effort includes (1) developing an inventory of all U.S. assistance provided to Colombia in order to identify potential candidates for nationalization, (2) prioritizing the list and working with the Ministry of Finance and the National Planning Department to ensure that adequate funds will be made available to finance these priority items, and (3) discussing the prioritized list with U.S. representatives.

Despite an improving economy and growth in public-sector resources, the Colombians have issued a call for international assistance to help fund a portion of PCCP from 2007 through 2013, noting that even a "single year without international support would force a retreat on the important advances that have been made so far." The call for assistance is similar to that issued by the Colombians at the outset of Plan Colombia, when internal security concerns and poor economic conditions limited the Colombian government's ability to fund its counternarcotics and counterterrorism objectives. The PCCP plan calls for spending by Colombia to total almost $44 billion from 2007 through 2013, with $6 billion of this total devoted to counternarcotics and counterterrorism operations and the balance devoted to social, economic, and rule of law efforts.

When Plan Colombia was first announced in 1999, a combination of domestic and foreign events limited Colombia's economic growth and its ability to fully fund the costs of its plan. As noted in a November 2007 assessment by the Center for Strategic and International Studies (CSIS), Colombia's financial system experienced a period of stress during the late 1990s, characterized by the failure of several banks and other financial institutions, as well as by the severe deterioration of the system's financial health.
The situation was exacerbated by violent conflict and, in 1999, the country's gross domestic product fell by 4.2 percent, the first contraction in output since the 1930s. In 2003, we reported that Colombia's ability to provide additional funding to sustain the counternarcotics programs without a greatly improved economy was limited. Improvements in Colombia's security environment and economy have allowed the government to significantly increase spending levels in a number of areas. Colombia's $130 billion economy grew at 6.8 percent in 2006, the highest rate in 28 years and two percentage points faster than the Latin American average. Colombia has reduced its inflation rate from 16.7 percent in 1998 to 4.5 percent in 2006. According to the CSIS report, Colombia has improved its economy through a combination of fiscal reforms, public debt management, reduction of inflation, and strengthening of the financial system—policies that, along with three successive International Monetary Fund arrangements, have placed the country on a path of sustainable growth while reducing poverty and unemployment.

While Plan Colombia's drug reduction goals were not fully met, U.S. assistance has helped stabilize Colombia's internal security situation by weakening the power of illegal armed groups to hold disputed areas that largely correspond to the major coca-growing regions in the country. State anticipates that billions of dollars in additional aid will need to be provided to Colombia through at least 2013 to help achieve a desired end state in which drug, security, social and economic welfare, and civil society problems reach manageable levels. One principal challenge is determining which combination of military and nonmilitary programs will have the greatest effect on combating the drug trade in Colombia. Program activities in the past have relied heavily on the use of aerial spraying as a key tool for driving down coca cultivation levels, and the vast bulk of U.S. counternarcotics assistance has gone to eradication and interdiction efforts. However, coca cultivation reduction goals were not met. As a result, Congress directed a decreased emphasis on aerial eradication, while directing that more be spent on alternative development and in other nonmilitary program areas. However, USAID does not currently measure the effect alternative development has on this goal or the extent to which its programs are self-sustaining.

Congress has renewed its call for accelerated nationalization efforts on the part of State and other U.S. agencies operating in Colombia. Both State and Defense are engaged in reducing assistance for military and police programs. USAID and Justice officials agree that sustainable nonmilitary programs will take years to develop; however, both agencies have begun to nationalize some portions of their assistance. While high-level planning for nationalization has taken place and several discrete planning efforts are in place or are under development, U.S. nationalization efforts are not guided by an integrated plan that fully addresses the complex mix of agency programs, differing agency goals, and varying timetables for nationalization. Such a plan should include key milestones and future funding requirements that take into account the government of Colombia's ability to assume program costs supported by the United States.
We recommend that the Secretary of State, in conjunction with the Secretary of Defense, Attorney General, and Administrator of USAID, and in coordination with the government of Colombia, develop an integrated nationalization plan that details plans for turning over operational and funding responsibilities for U.S.-supported programs to Colombia. This plan should define U.S. roles and responsibilities for all U.S.-supported military and nonmilitary programs. Other key plan elements should include future funding requirements; a detailed assessment of Colombia's fiscal situation, spending priorities, and ability to assume additional funding responsibilities; and specific milestones for completing the transition to the Colombians. We also recommend that the Director of Foreign Assistance and Administrator of USAID develop performance measures that will help USAID (1) assess whether alternative development assistance is reducing the production of illicit narcotics, and (2) determine to what extent the agency's alternative development projects are self-sustaining.

We provided a draft of this report to the departments of Defense, Homeland Security, Justice, and State; ONDCP; and USAID for their comments. Defense, State, ONDCP, and USAID provided written comments, which are reproduced in appendixes IV through VII. All except Homeland Security provided technical comments and updates, which we incorporated in the report, as appropriate.

In commenting on our recommendation to the Secretary of State, State agreed that it should continue to improve the coordination of nationalization efforts among Defense, other executive branch agencies, and the government of Colombia. State noted that its annual multiyear strategy report (which it first provided to Congress in 2006) offers the most useful format to address our recommendation. While State's annual report is useful, it does not incorporate and rationalize the complex mix of agency programs, funding plans and schedules, differing agency goals, and varying timetables for nationalization, as we recommend. State did not indicate how it intends to address these more detailed elements with Defense, Justice, and USAID. We continue to believe that an integrated plan addressing these elements would benefit the interagency community and the Congress alike, as future assistance for Colombia is considered.

In commenting on our recommendation to the Administrator of USAID, USAID stated that the measures it has in place are sufficient to gauge progress toward its strategic goals. However, USAID went on to say that better measures and indicators to assess alternative development projects could be developed. The USAID mission in Colombia noted that it is working with the USAID missions in Bolivia and Peru, which also manage alternative development programs, to identify new indicators to help measure progress. The USAID/Colombia mission also stated that USAID/Washington should lead an effort, in conjunction with the field and other interested agencies, to develop common indicators that would enhance USAID's ability to measure alternative development performance. We concur. In making our recommendation, we concluded that USAID's measures were largely output indicators that did not directly address reducing illicit drug activities or the long-term sustainability of USAID's efforts. An overall review such as the one USAID/Colombia suggests may help address this shortcoming.
ONDCP and State commented that our draft report left the impression that little or no progress had been made with regard to Plan Colombia's counternarcotics goal. In response, we modified the report title and specific references in the report to better reflect that some progress was made; primarily, opium poppy cultivation and heroin production were reduced by about 50 percent. However, coca cultivation and cocaine production have been the focus of Colombian and U.S. drug reduction efforts since 2000. Neither was reduced; rather, both coca cultivation and cocaine production rose from 2000 to 2006. However, at ONDCP's suggestion, we added current information that suggests cocaine productivity (cocaine yield per hectare of coca) in Colombia has declined in recent years. Finally, ONDCP commented that the report did not adequately address the full range of program goals associated with Plan Colombia and the progress made toward achieving these goals. We disagree. In characterizing and summarizing Plan Colombia's goals and U.S. programs, we reviewed reports prepared by State as well as our prior reports, and discussed the goals and associated programs with U.S. officials both in Washington, D.C., and the U.S. Embassy in Bogotá, and with numerous government of Colombia officials. We addressed U.S. assistance provided for nine specific Colombian military and National Police programs to increase their operational capacity, as well as numerous State, Justice, and USAID efforts to promote social and economic justice, including alternative development, and to promote the rule of law, including judicial reform and capacity building. We also note that State, USAID, and Defense did not raise similar concerns.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Defense and State; the Attorney General; the Director of Foreign Assistance and USAID Administrator; and the Director of ONDCP. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or FordJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.

We examined U.S. assistance efforts since 2000, when funding for Plan Colombia was first approved. Specifically, we examined (1) the progress made toward Plan Colombia's drug reduction and enhanced security objectives; (2) program support provided to the Colombian military and National Police, including specific results and related challenges; (3) nonmilitary program support provided to Colombia, including specific results and related challenges; and (4) the status of U.S. and Colombian efforts to nationalize U.S. assistance and the challenges, if any, these efforts face. To address the progress made toward Plan Colombia's drug reduction and enhanced security objectives, we reviewed various U.S. and Colombian government reports and met with cognizant officials to discuss trends and the nature of the data. For trends in drug cultivation, production, and flow, we relied primarily on U.S. agencies' information and officials.
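The cultivation, yield, and production figures discussed above are linked by a simple arithmetic identity: estimated potential cocaine production is the product of the estimated cultivated area and the estimated yield per hectare. The sketch below illustrates that arithmetic only; the numbers in it are hypothetical and are not CNC, ONDCP, or IACM estimates, and it shows how a declining yield per hectare can partially offset rising cultivation in the resulting production estimate.

    def potential_production_mt(cultivation_hectares, yield_kg_per_hectare):
        # Potential cocaine production (metric tons) =
        # hectares under cultivation x estimated yield (kg per hectare) / 1,000
        return cultivation_hectares * yield_kg_per_hectare / 1000

    # Hypothetical illustration: cultivation rises while the estimated yield
    # per hectare declines, so the production estimate changes less than
    # either input. These inputs are invented for illustration only.
    earlier = potential_production_mt(140000, 4.7)
    later = potential_production_mt(157000, 4.0)
    print(round(earlier), round(later))  # prints: 658 628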
For trends in security data, including government territorial control, homicides, kidnappings, and ridership on Colombian roads, we relied on data reported by the Colombian Ministry of Defense and other Colombian government ministries. To evaluate trends in Colombian drug cultivation and trafficking since calendar year 2000, we reviewed various studies, such as the National Drug Threat Assessment produced each year by the National Drug Intelligence Center. We reviewed various strategy documents produced by the United States that are the basis for overall drug control efforts, such as the Office of National Drug Control Policy's (ONDCP) annual National Drug Control Strategy, and the Department of State's (State) annual International Narcotics Control Strategy Report (INCSR). To track changes in coca cultivation and cocaine production trends in Colombia, we relied on the Interagency Assessment of Cocaine Movement (IACM), an annual interagency study designed to advise policymakers and resource analysts whose responsibilities include detecting, monitoring, and interdicting illegal drug shipments. To track changes in the combined amount of cocaine flowing towards the United States from Bolivia, Colombia, and Peru, we relied on the IACM. Because no similar interagency flow assessments are done for heroin, we obtained estimates of production and seizures from State's INCSR and the National Drug Threat Assessments. To understand how these estimates were developed, we discussed the studies and overall trends in the illicit drug threat from Colombia with officials from the Defense Intelligence Agency in Arlington, Virginia; the Drug Enforcement Administration in Arlington, Virginia; the Central Intelligence Agency's Crime and Narcotics Center (CNC), Langley, Virginia; the Joint Interagency Task Force-South, Key West, Florida; and the Narcotics Affairs Section and the U.S. Military Group, U.S. Embassy, Bogotá, Colombia. We also met with and discussed these overall trends with Colombian officials in the Ministries of Defense, including the Deputy Minister of Defense. In addition, we compared the patterns and trends for the cultivation, production, and movement of cocaine and the cultivation and production of opium and noted that they were broadly consistent.

We determined that cultivation, production, and illicit narcotics flow data have some limitations, due in part to the illegal nature of the drug trade and the time lag inherent in collecting meaningful data. With regard to estimates of coca cultivation and cocaine production levels in Colombia, we noted that CNC expanded the number of hectares surveyed for coca cultivation beginning in 2005 in response to concerns that coca farmers were moving their operations to avoid aerial spray operations. Between 2004 and 2006, CNC's survey area rose from 10.9 million hectares to 23.6 million hectares. This change complicates the process of comparing pre-2005 cultivation levels with later-year estimates. In addition, because of methodological concerns, the IACM began reporting in 2004 its estimated flow of cocaine as a range rather than a point estimate. Notwithstanding these limitations, we determined that these data were sufficiently reliable to provide an overall indication of the relative magnitude of, and general trends in, Colombia's illicit drug trade since 2000.

To evaluate security trends, we used data provided primarily by the government of Colombia. To assess its reliability, we interviewed knowledgeable officials at the U.S.
Embassy Bogotá and compared general patterns across data sets. We met with and discussed these overall trends with Colombian officials in the Ministries of Defense (including the Deputy Minister of Defense) and Justice (including the Colombian Attorney General). Some civil society representatives expressed concern that Colombian officials may be pressured to present favorable statistics, and that some information may be exaggerated. Nonetheless, U.S. officials, both in Washington, D.C., and Bogotá, expressed confidence that the data illustrate overall trends that are widely accepted as accurate. U.S. officials added that while specific checks on the validity of these data are not conducted, data provided by Colombia are consistent with independent U.S. Embassy Bogotá reporting on Colombia’s political, military, and economic environment. As a result, we determined that the data were sufficiently reliable to indicate general trends in government territorial control, homicides, kidnappings, and ridership between 2000 and 2006. To assess program support provided to the Colombian military and National Police since 2000, including results and related challenges, we reviewed and analyzed congressional budget presentations, program and project status reports, our prior reports, and related information. We also reviewed program and budgetary data from the various departments and agencies in Washington, D.C., that manage these programs and met with officials responsible for these programs, including officials from State and Defense, as well as the Office of National Drug Control Policy. We met with cognizant U.S. officials at the U.S. Southern Command headquarters, Miami, Florida; State’s Office of Aviation Programs headquarters, Patrick Air Force Base, Florida; and the Joint Interagency Task Force-South, Key West, Florida. At the U.S. Embassy in Bogotá, Colombia, we met with U.S. officials with the Narcotics Affairs Section, the U.S. Military Group, and the Drug Enforcement Administration, as well as U.S.-funded contractor representatives assisting with the Colombian Army Aviation Brigade, the National Police Air Service, and the police aerial eradication program. In Bogotá, we also met with Colombian Ministry of Defense military and police commanders and other officials, including the Deputy Minister of Defense. We visited facilities and met with Colombian Army commanders at the Army’s Aviation Brigade headquarters in Tolemaida, the Counternarcotics Brigade headquarters in Larandia, and Task Force-Omega’s operating base in La Macarena; and Colombian Marine commanders at their operating base in Tumaco. We also visited facilities and met with Colombian National Police commanders and other officials at the National Police’s main base in Guaymaral (near Bogotá) and a police operating base in Tumaco, where we observed an aerial eradication mission in southern Nariño. To evaluate the reliability of funding and performance data (beyond the drug cultivation, production, and flow data and the security indicators discussed above) provided by U.S. and Colombian officials, we analyzed relevant U.S. and Colombian data sources and interviewed cognizant officials to determine the basis for reported information. We performed cross-checks of the data by comparing internal and external budget reports (such as State and Defense Congressional Budget Justifications), agency performance reports, and classified information sources.
We determined that the cost and performance data provided were sufficiently reliable for the purposes of our report. To assess nonmilitary program support provided since 2000, including results and related challenges, we reviewed our prior reports along with pertinent planning, implementation, strategic, and related documents and met with cognizant U.S. officials at State, Justice, and the U.S. Agency for International Development (USAID) in Washington, D.C., and the U.S. Embassy in Bogotá, Colombia. To review the progress of alternative development programs, we met with USAID officials and contractors in Washington, D.C., and in Colombia. We reviewed pertinent planning documentation, including USAID strategic plans for 2000-2005 and 2006-2007, as well as progress reports produced by USAID’s prime contractor. We observed alternative development programs in the departments of Bolívar, Huila, Popayán, and Santander. To review efforts on internally displaced persons and demobilization, we met with officials from USAID, Justice, and State’s Bureau of Population, Refugees, and Migration in Washington, D.C., and in Colombia. We interviewed government of Colombia officials from Acción Social, the National Commission on Reconciliation and Reparations, the Ministry of Interior and Justice, the Fiscalía, the Superior Council for the Judiciary, the Inspector General’s Office, the Public Defenders Directorate, the Ministry of Agriculture, and the Ministry of Labor and Social Protection. We also met with the High Commissioner for Reconciliation and Reintegration in Colombia, and with civil society and private-sector representatives both in Washington, D.C., and Colombia regarding human rights issues. We observed programs in the cities of Bogotá, Cartagena, and Medellín. To evaluate the reliability of funding and performance data provided by U.S. and Colombian officials, we analyzed relevant U.S. and Colombian data sources and interviewed cognizant officials to determine the basis for reported information. We performed cross-checks of provided data against internal agency budget documents and external U.S. budget reports (such as State, USAID, and Justice Congressional Budget Justifications), agency performance reports, and Colombian reports and studies. We determined that the cost data provided by U.S. agencies were sufficiently reliable for our purposes. We did note certain limitations with regard to the performance data we received from U.S. agencies. Because of the difficult security situation in Colombia, U.S. agencies must often rely on third parties to document performance data. In particular, the USAID Office of Inspector General raised some concerns in May 2007 regarding the consistency with which alternative development performance goals had been defined, but was nevertheless able to use the data to determine whether overall goals had been met. Consequently, we determined that the data on families that have benefited from alternative development assistance, infrastructure projects completed, hectares of licit agricultural crops developed, and private-sector funds leveraged by USAID activities were sufficiently reliable to allow for broad comparisons of actual performance in 2007 against the goals that had been set, but that these data could not be used for very precise comparisons. To determine the status of U.S. and Colombian efforts to nationalize U.S.
assistance, we reviewed planning and strategic documents related to nationalization, including a memorandum of understanding between the United States and Colombia regarding the transfer of programs. We met with State and Defense officials in Washington, D.C.; State’s Office of Aviation Programs at Patrick Air Force Base; and U.S. Southern Command in Florida. We met with a special consultant to State who was conducting a strategic review of State programs in Colombia. In Colombia, we met with designated U.S. Embassy Bogotá officials responsible for managing U.S. nationalization efforts, along with an ambassador appointed by State to lead negotiations with Colombia regarding current and planned steps in the nationalization process. We discussed the implications of nationalization with Colombian government officials from the National Planning Department, the Ministry of Defense (in particular, the Office of Special Projects charged with leading the ministry’s nationalization efforts), the Colombian Army and National Police, the Ministry of Interior and Justice, and Acción Social. Finally, the information and observations on foreign law in this report do not reflect our independent legal analysis but are based on interviews with cognizant officials and secondary sources. State and Defense officials told us that the Army Aviation Brigade has been provided with essential support services needed to manage a modern combat aviation service, including infrastructure and maintenance support; contract pilots and mechanics; assistance to train pilots and mechanics; flight planning, safety, and quality control standards and procedures; and a logistics system. Table 5 describes these support services in more detail. Similar to the Army Aviation Brigade, State has provided key program support elements to the Colombian National Police’s Air Service. These elements include contract mechanics; mechanics training; the construction of helipads and hangars; and funding for spare parts, fuel, and other expenses. Table 6 describes these support services in more detail. As illustrated in figure 19, the estimated number of hectares of coca under cultivation in Bolivia, Colombia, and Peru has varied from an estimated 187,500 hectares to 233,000 hectares in 2007 and has averaged about 200,000 hectares per year since 2000. As noted in our report, these changes were due, at least in part, to the Crime and Narcotics Center’s decision to increase the size of the coca cultivation survey areas in Colombia from 2004 to 2006. The U.S. interagency counternarcotics community uses the number of hectares of coca under cultivation to help estimate the amount of 100 percent pure cocaine that can be produced in each country. Essentially, the community calculates production efficiency rates for turning coca leaf into cocaine and applies these rates to the total number of hectares under cultivation. As illustrated in figure 20, the total amount of estimated pure cocaine produced in Bolivia, Colombia, and Peru has fluctuated since 2000 but has risen from 770 metric tons in 2000 to 865 metric tons in 2007, and averaged about 860 metric tons per year since 2000. In 2008, the interagency counternarcotics community reduced Colombia’s estimated cocaine production efficiency rate for the years 2003 through 2007. The community attributed the reduced efficiency to Colombia’s efforts to eradicate coca.
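The arithmetic behind these production estimates can be illustrated with a simple sketch. The hectare figures and kilograms-per-hectare rates below are hypothetical placeholders, not the interagency community’s actual estimates, and the sketch omits adjustments the community makes for eradication, newly planted fields, and other factors; it is intended only to show how surveyed cultivation and an efficiency rate combine into a potential production figure.

```python
# Illustrative sketch only: all figures are hypothetical, not official estimates.

def potential_production_mt(hectares: float, kg_per_hectare: float) -> float:
    """Estimated metric tons of pure cocaine from hectares under cultivation."""
    return hectares * kg_per_hectare / 1000.0  # convert kilograms to metric tons

# Hypothetical country-level inputs: (hectares surveyed, kg of pure cocaine per hectare).
survey = {
    "Colombia": (157_000, 4.6),
    "Peru": (36_000, 7.3),
    "Bolivia": (25_500, 4.5),
}

total = 0.0
for country, (hectares, rate) in survey.items():
    tons = potential_production_mt(hectares, rate)
    total += tons
    print(f"{country}: {hectares:,} ha x {rate} kg/ha = {tons:,.0f} metric tons")
print(f"Combined potential production: {total:,.0f} metric tons")

# Lowering the efficiency rate (as the interagency did for Colombia for 2003
# through 2007) reduces the production estimate even if cultivation is unchanged.
print(f"Colombia at a reduced rate: {potential_production_mt(157_000, 3.9):,.0f} metric tons")
```

As the sketch suggests, a revision to either the surveyed cultivation area or the efficiency rate changes the production estimate even when the other input stays the same.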
However, according to Drug Enforcement Administration officials, the interagency had also raised the production efficiency rate in Peru for 2002 through 2005 due to better processing techniques, which offset much of the reduction in Colombia. The Drug Enforcement Administration also noted that it has not reassessed the cocaine production efficiency rate in Bolivia since 1993, but expects that Bolivia has improved its processing techniques and is producing more pure cocaine than the interagency has estimated. Following are GAO’s comments on the Department of Defense’s comment letter dated September 17, 2008. 1. The transfer of these assets was not highlighted as a significant example of nationalization during the course of our review when we met with Defense officials in Washington, D.C., or the U.S. Military Group in Bogotá. Nonetheless, we added a statement to report progress in this area. 2. We incorporated Defense’s observation that the Strategic Partner Transition Plan addresses both Foreign Military Financing and Defense counternarcotics funding. As noted in our report, however, State’s Political-Military Bureau declined to provide us a copy of the plan until it is formally released to Congress. As a result, we were not able to independently assess the plan’s content and scope. Following are GAO’s comments on the State Department’s comment letter dated September 17, 2008. 1. We included additional information on coca cultivation and cocaine production patterns in the final report. We also note that 2007 coca cultivation and cocaine production data did not become available until after this report was released for agency comments, and we have added these data, as appropriate. Following are GAO’s comments on the Office of National Drug Control Policy’s comment letter dated September 17, 2008. 1. We disagree. In characterizing and summarizing Plan Colombia’s goals and U.S. programs, we reviewed reports prepared by State as well as our prior reports, and discussed the goals and associated programs with U.S. officials both in Washington, D.C., and the U.S. Embassy in Bogotá, and with numerous government of Colombia officials. We addressed U.S. assistance provided for nine specific Colombian military and National Police programs to increase their operational capacity, as well as numerous State, Justice, and USAID efforts to promote social and economic justice, including alternative development, and to promote the rule of law, including judicial reform and capacity building. We also note that State, USAID, and Defense did not raise similar concerns. 2. The drop in potential cocaine production that ONDCP cites compares 2001 (when coca cultivation and production peaked) to 2007. Our report compares 2000 (when U.S. funding for Plan Colombia was first approved) to 2006 (Plan Colombia’s drug reduction goal was tied to a 6-year time period). We also note that 2007 coca cultivation and cocaine production data did not become available until after this report was released for agency comments, and we have added these data, as appropriate. Following are GAO’s comments on the U.S. Agency for International Development’s comment letter dated September 11, 2008. 1. We modified the report to note that USAID has initiated nationalization efforts for each of its major program areas and several major projects. However, we note that USAID’s nationalization efforts are program- and project-specific and are not integrated with the range of other U.S. government efforts, as we recommend should be done. 2.
We believe we fairly characterized USAID’s assistance role in the counternarcotics strategy for Colombia. However, we did not intend to imply that USAID alternative development programs are social programs. We intended to note that USAID’s assistance supports social infrastructure, such as schools and other community projects. We clarified the text where appropriate. 3. We only intended to note that most coca-growing areas do not receive USAID assistance for various reasons, including restrictions by the government of Colombia. USAID resources are scarce and must be deployed to the areas most likely to achieve sustainable results. We added text to note that the majority of the Colombian population lives within the geographic areas where USAID operates. However, the fact that the majority of coca is cultivated outside of USAID’s economic corridors poses challenges for USAID’s strategic goal of reducing the production of illegal drugs. 4. We endorse and commend USAID/Colombia’s attempt to work at both the mission level and with USAID/Washington to develop common indicators that would enhance USAID’s ability to assess the performance of alternative development projects. 5. We recognize that key indicators such as increased gross market value and number of families benefited are useful in determining the impact of USAID programs at a family or farm level. However, these indicators do not measure the sustainability of the projects, such as whether families or businesses have continued in legal productive activities after USAID assistance has ended. 6. We agree that outside support for USAID alternative development projects is a key component of creating self-sustaining projects. However, this point does not address the fact that USAID does not currently collect and report data on whether USAID-supported activities continue after its involvement ends. In addition to the individual named above, A.H. Huntington, III, Assistant Director; Joseph Carney, Jonathan Fremont, Emily Gupta, Jose Peña, and Michael ten Kate made key contributions to this report. Technical assistance was provided by Joyce Evans, Jena Sinkfield, and Cynthia Taylor. Drug Control: Cooperation with Many Major Drug Transit Countries Has Improved, but Better Performance Reporting and Sustainability Plans Are Needed. GAO-08-784. Washington, D.C.: July 15, 2008. Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, but the Flow of Illicit Drugs into the United States Remains High. GAO-08-215T. Washington, D.C.: October 25, 2007. Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, but Tons of Illicit Drugs Continue to Flow into the United States. GAO-07-1018. Washington, D.C.: August 17, 2007. State Department: State Has Initiated a More Systematic Approach for Managing Its Aviation Fleet. GAO-07-264. Washington, D.C.: February 2, 2007. Drug Control: Agencies Need to Plan for Likely Declines in Drug Interdiction Assets, and Develop Better Performance Measures for Transit Zone Operations. GAO-06-200. Washington, D.C.: November 15, 2005. Security Assistance: Efforts to Secure Colombia’s Caño Limón-Coveñas Oil Pipeline Have Reduced Attacks, but Challenges Remain. GAO-05-971. Washington, D.C.: September 6, 2005. Drug Control: Air Bridge Denial Program in Colombia Has Implemented New Safeguards, but Its Effect on Drug Trafficking Is Not Clear. GAO-05-970. Washington, D.C.: September 6, 2005. Drug Control: U.S.
Nonmilitary Assistance to Colombia Is Beginning to Show Intended Results, but Programs Are Not Readily Sustainable. GAO-04-726. Washington, D.C.: July 2, 2004. Drug Control: Aviation Program Safety Concerns in Colombia Are Being Addressed, but State’s Planning and Budgeting Process Can Be Improved. GAO-04-918. Washington, D.C.: July 29, 2004. Drug Control: Specific Performance Measures and Long-Term Costs for U.S. Programs in Colombia Have Not Been Developed. GAO-03-783. Washington, D.C.: June 16, 2003. Drug Control: Financial and Management Challenges Continue to Complicate Efforts to Reduce Illicit Drug Activities in Colombia. GAO-03-820T. Washington, D.C.: June 3, 2003. Drug Control: Coca Cultivation and Eradication Estimates in Colombia. GAO-03-319R. Washington, D.C.: January 8, 2003. Drug Control: Efforts to Develop Alternatives to Cultivating Illicit Crops in Colombia Have Made Little Progress and Face Serious Obstacles. GAO-02-291. Washington, D.C.: February 8, 2002. Drug Control: Difficulties in Measuring Costs and Results of Transit Zone Interdiction Efforts. GAO-02-13. Washington, D.C.: January 25, 2002. Drug Control: State Department Provides Required Aviation Program Support, but Safety and Security Should Be Enhanced. GAO-01-1021. Washington, D.C.: September 14, 2001. Drug Control: U.S. Assistance to Colombia Will Take Years to Produce Results. GAO-01-26. Washington, D.C.: October 17, 2000. Drug Control: Challenges in Implementing Plan Colombia. GAO-01-76T. Washington, D.C.: October 12, 2000. Drug Control: U.S. Efforts in Latin America and the Caribbean. GAO/NSIAD-00-90R. Washington, D.C.: February 18, 2000.
In September 1999, the government of Colombia announced a strategy, known as "Plan Colombia," to (1) reduce the production of illicit drugs (primarily cocaine) by 50 percent in 6 years and (2) improve security in Colombia by reclaiming control of areas held by illegal armed groups. Since fiscal year 2000, the United States has provided over $6 billion to support Plan Colombia. The Departments of State, Defense, and Justice and the U.S. Agency for International Development (USAID) manage the assistance. GAO examined (1) the progress made toward Plan Colombia's drug reduction and enhanced security objectives, (2) the results of U.S. aid for the military and police, (3) the results of U.S. aid for nonmilitary programs, and (4) the status of efforts to "nationalize" or transfer operations and funding responsibilities for U.S.-supported programs to Colombia. Plan Colombia's goal of reducing the cultivation, processing, and distribution of illegal narcotics by 50 percent in 6 years was not fully achieved. From 2000 to 2006, opium poppy cultivation and heroin production declined about 50 percent, while coca cultivation and cocaine production levels increased by about 15 and 4 percent, respectively. These increases, in part, can be explained by measures taken by coca farmers to counter U.S. and Colombian eradication efforts. Colombia has improved its security climate through systematic military and police engagements with illegal armed groups and by degrading these groups' finances. U.S. Embassy Bogotá officials cautioned that these security gains will not be irreversible until illegal armed groups can no longer threaten the stability of the government of Colombia and instead become a law enforcement problem requiring only police attention. Since fiscal year 2000, State and Defense provided nearly $4.9 billion to the Colombian military and National Police. Notably, over 130 U.S.-funded helicopters have provided the air mobility needed to rapidly move Colombian counternarcotics and counterinsurgency forces. U.S. advisors, training, equipment, and intelligence assistance have also helped professionalize Colombia's military and police forces, which have recorded a number of achievements, including the aerial and manual eradication of hundreds of thousands of hectares of coca, the seizure of tons of cocaine, and the capture or killing of a number of illegal armed group leaders and thousands of combatants. However, these efforts face several challenges, including countermeasures taken by coca farmers to combat U.S. and Colombian eradication efforts. Since fiscal year 2000, State, Justice, and USAID have provided nearly $1.3 billion for a wide range of social, economic, and justice sector programs. These programs have had a range of accomplishments, including aiding internally displaced persons and reforming Colombia's justice sector. But some efforts have been slow in achieving their objectives, while others are difficult to assess. For example, the largest share of U.S. nonmilitary assistance has gone toward alternative development, which has provided hundreds of thousands of Colombians legal economic alternatives to the illicit drug trade. But alternative development is not provided in most areas where coca is cultivated, and USAID does not assess how such programs relate to its strategic goals of reducing the production of illicit drugs or achieving sustainable results. In response to congressional direction in 2005 and budget cuts in fiscal year 2008, State and the other U.S.
departments and agencies have accelerated their nationalization efforts, with State focusing on Colombian military and National Police aviation programs. One aviation program has been nationalized and two are in transition, with the largest--the Army Aviation Brigade--slated for turnover by 2012. Two National Police aviation programs have no turnover dates established. State, Defense, Justice, and USAID each have their own approaches to nationalization, with different timelines and objectives that have not been coordinated to promote potential efficiencies.
In response to global challenges the government faces in the coming years, we have a unique opportunity to create an extremely effective and performance-based organization that can strengthen the nation’s ability to protect its borders and citizens against terrorism. There is likely to be considerable benefit over time from restructuring some of the homeland security functions, including reducing risk and improving the economy, efficiency, and effectiveness of these consolidated agencies and programs. Realistically, however, in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and will take additional resources to make it fully effective. The Comptroller General has testified that the Congress should consider several very specific criteria in its evaluation of whether individual agencies or programs should be included or excluded from the proposed department. Those criteria include the following: Mission Relevancy: Is homeland security a major part of the agency or program mission? Is it the primary mission of the agency or program? Similar Goals and Objectives: Does the agency or program being considered for the new department share primary goals and objectives with the other agencies or programs being consolidated? Leverage Effectiveness: Does the agency or program being considered for the new department promote synergy and help to leverage the effectiveness of other agencies and programs or the new department as a whole? In other words, is the whole greater than the sum of the parts? Gains Through Consolidation: Does the agency or program being considered for the new department improve the efficiency and effectiveness of homeland security missions through eliminating duplications and overlaps, closing gaps, and aligning or merging common roles and responsibilities? Integrated Information Sharing/Coordination: Does the agency or program being considered for the new department contribute to or leverage the ability of the new department to enhance the sharing of critical information or otherwise improve the coordination of missions and activities related to homeland security? Compatible Cultures: Can the organizational culture of the agency or program being considered for the new department effectively meld with the other entities that will be consolidated? Field structures and approaches to achieving missions vary considerably between agencies. Impact on Excluded Agencies: What is the impact on departments losing components to the new department? What is the impact on agencies with homeland security missions left out of the new department? In the President’s proposal, the new Department of Homeland Security would be responsible for conducting a national scientific research and development program, including developing national policy and coordinating the federal government’s civilian efforts to counter chemical, biological, radiological, and nuclear weapons or other emerging terrorist threats. The new department would carry out its civilian health-related biological, biomedical, and infectious disease defense research and development through agreements with HHS, unless otherwise directed by the President. As part of this responsibility, the new department would establish priorities and direction for programs of basic and applied research on the detection, treatment, and prevention of infectious diseases such as those programs conducted by NIH. 
NIH supports and carries out biomedical research to study, prevent, and treat infectious and immunologic human diseases. Infectious diseases include those caused by new, emerging, and reemerging infectious agents, including those that are intentionally introduced as an act of bioterrorism. The emphasis of antiterrorism research supported by NIH has been in four areas: (1) design and testing of new diagnostic tools; (2) design, development, and clinical evaluation of therapies; (3) design, development, and clinical evaluation of vaccines; and (4) other basic research, including genome sequencing. The President’s proposal also would transfer the select agent program from HHS to the new department. Currently administered by CDC, this program’s mission is ensuring the security of those biologic agents that pose a severe threat to public health and safety and could be used by terrorists. The proposal provides for the new department to consult with appropriate agencies, which would include HHS, in maintaining the select agent list and to consult with HHS in carrying out the program. The proposed Department of Homeland Security would be tasked with developing national policy for and coordinating the federal government’s civilian research and development efforts to counter chemical, biological, radiological, and nuclear threats. The new department also could improve coordination of biomedical research and development efforts. In addition to coordination, the role of the new department would need to include forging collaborative relationships with programs at all levels of government and developing a strategic plan for research and development. We have previously reported that the limited coordination among federal research and development programs may result in a duplication of efforts. Coordination is hampered by the extent of compartmentalization of efforts because of the sensitivity of the research and development programs, security classification of research, and the absence of a single coordinating entity to help prevent duplication. For example, the Department of Defense’s (DOD) Defense Advanced Research Projects Agency was unaware of U.S. Coast Guard plans to develop methods to detect a biological agent on an infected cruise ship and therefore was unable to share information on its research to develop biological detection devices that could have been applicable to this effort. The new department would need to develop mechanisms to coordinate and integrate information about ongoing research and development being performed across the government related to chemical, biological, radiological, and nuclear terrorism, as well as harmonize user needs. Although the proposal tasks the new department with coordinating the federal government’s “civilian efforts” only, the new department also would need to coordinate with DOD because DOD conducts biomedical research and development efforts designed to detect and respond to weapons of mass destruction. Although DOD’s efforts are geared toward protecting armed services members, they may also be applicable to the civilian population. Currently, NIH is working with DOD on biomedical research and development efforts, and it is important for this collaboration to continue. An example of NIH and DOD’s efforts is their support of databases to compare the sequences and functions of poxvirus genes.
These searchable databases enable researchers to select targets for designing antiviral drugs and vaccines, and serve as repositories for information on well documented poxvirus strains to aid in detection and diagnosis. The President’s proposal could help improve coordination of federal research and development by giving one person the responsibility for a single national research and development strategy that could address coordination, reduce potential duplication, and ensure that important issues are addressed. In 2001, we recommended the creation of a unified strategy to reduce duplication and leverage resources, and suggested that the plan be coordinated with federal agencies performing the research as well as with state and local authorities. Such a plan would help to ensure that research gaps are filled, unproductive duplication is minimized, and that individual agency plans are consistent with the overall goals. We are concerned about the implications of the proposed transfer of control and priority setting for dual-purpose research programs. For example, some research programs have broad missions that are not easily separated into homeland security research and research for other purposes. We are concerned that such dual-purpose research activities may lose the synergy arising from their current placement. The President’s proposal would transfer the responsibility for civilian biomedical defense research and development programs to the new department, but the programs would continue to be carried out through HHS. These programs, now primarily sponsored by NIH, include a variety of efforts to understand basic biological mechanisms of infection and to develop and test rapid diagnostic tools, vaccines, and antibacterial and antiviral drugs. These efforts have dual-purpose applicability. The scientific research on biologic agents that could be used by terrorists cannot be readily separated from research on emerging infectious diseases. For example, research being carried out on antiviral drugs in the NIH biodefense research program is expected to be useful in the development of treatments for hepatitis C. NIH biodefense research on enhanced immunologic responses to protect against infection and disease is critical in the development of interventions against both naturally occurring and man-made pathogens. The proposal to transfer to the new department responsibility for research and development programs that would continue to be carried out by HHS raises many concerns. Although there is a clear need for the new department to have responsibility for setting policy, developing a strategy, providing leadership, and coordinating research and development efforts in these areas, we are concerned that control and priority-setting responsibility will not be vested in those programs best positioned to understand the potential of basic research efforts or the relevance of research being carried out in other, nonbiodefense programs. For example, NIH-funded research on a drug to treat cytomegalovirus complications in patients with HIV is now being investigated as a prototype for developing antiviral drugs against smallpox. There is the potential that the proposal would allow the new department to direct, fund, and conduct research related to chemical, biological, radiological, nuclear, and other emerging threats on its own. 
This raises the potential for duplication of effort, lack of efficiency, and an increased need for coordination with other departments that would continue to carry out relevant research. Design and implementation of a research agenda is most efficient at the level of the mission agency where scientific and technical expertise resides. Building and duplicating the existing facilities and expertise in the current federal laboratories needed to conduct this research would be inefficient. The proposal would transfer the Laboratory Registration/Select Agent Transfer Program from HHS to the new department. The select agent program, recently revised and expanded by the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, generally requires the registration of persons and laboratory facilities possessing specific biologic agents and toxins—called select agents—that have the potential to pose a serious threat to public health and safety. Select agents include approximately 40 viruses, bacteria, rickettsia, fungi, and toxins. Examples include Ebola, anthrax, botulinum, and ricin. The 2002 act expanded the program’s requirements to include facilities that possess the agents as well as the facilities that transfer the agents. The mission of the select agent program appears to be closely aligned with homeland security. As we stated earlier, one key consideration in evaluating whether individual agencies or programs should be included or excluded from the proposed department is the extent to which homeland security is a major part of the agency or program mission. By this criterion, the transfer of the select agent program would enhance efficiency and accountability. The President’s proposal would address some shortcomings noted earlier in this statement. Better coordination could reduce wasteful duplication and increase efficiency. The mission of the select agent program is aligned with the new department and, therefore, the transfer of the program would enhance efficiency and accountability. However, we are concerned about the broad control the proposal grants to the new department for biomedical research and development. Although there is a need to coordinate these activities with the other homeland security preparedness and response programs that would be brought into the new department, there is also a need to maintain the priorities for current dual-purpose biomedical research. The President’s proposal does not adequately address how to accomplish both objectives or how to maintain a priority-setting role for those best positioned to understand the relevance of biomedical research. We are also concerned that the proposal has the potential to create an unnecessary duplication of federal research capacity. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-7118. Robert Copeland, Marcia Crosse, and Deborah Miller also made key contributions to this statement. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-900T. Washington, D.C.: July 2, 2002. Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-899T. Washington, D.C.: July 1, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002. Homeland Security: Proposal for Cabinet Agency Has Merit, but Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. Homeland Security: Responsibility and Accountability for Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002. Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001. Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999. National Preparedness: Technologies to Secure Federal Buildings. GAO-02-687T. Washington, D.C.: April 25, 2002. National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002.
Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Chemical and Biological Defense: DOD Should Clarify Expectations for Medical Readiness. GAO-02-219T. Washington, D.C.: November 7, 2001. Anthrax Vaccine: Changes to the Manufacturing Process. GAO-02-181T. Washington, D.C.: October 23, 2001. Chemical and Biological Defense: DOD Needs to Clarify Expectations for Medical Readiness. GAO-02-38. Washington, D.C.: October 19, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-666T. Washington, D.C.: May 1, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-463. Washington, D.C.: March 30, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/T-HEHS/AIMD-00-59. Washington, D.C.: March 8, 2000. Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/HEHS/AIMD-00-36. Washington, D.C.: October 29, 1999. Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999. Chemical and Biological Defense: Coordination of Nonmedical Chemical and Biological R&D Programs. GAO/NSIAD-99-160. Washington, D.C.: August 16, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/T-NSIAD-99-184. Washington, D.C.: June 23, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3.
Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001. Chemical Weapons: FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.
Title III of the proposed Homeland Security Act of 2002 would transfer responsibility for certain chemical, biological, radiological, and nuclear research and development programs and activities to the new department. The proposed Department of Homeland Security would develop national policy for, and coordination of, the federal government's civilian research and development efforts to counter chemical, biological, radiological, and nuclear threats. Although the new department could improve coordination of existing research and development programs, the proposed transfer of control and priority setting for research from the organizations where the research would be conducted could be disruptive. Transferring control over these programs, including priority setting, to the new department has the potential to disrupt some programs that are critical to basic public health. The President's proposal is not clear on how both the homeland security and the biomedical research objectives would be accomplished. However, if an agency's mission fits with homeland security, its transfer to the new department is appropriate.
Figure 1 compares original cost estimates and current cost estimates for the broader portfolio of major space acquisitions for fiscal years 2008 through 2013. The wider the gap between original and current estimates, the fewer dollars DOD has available to invest in new programs. As shown in the figure, estimated costs for the major space acquisition programs have increased by about $10.9 billion from initial estimates for fiscal years 2008 through 2013. The declining investment in the later years is the result of the Evolved Expendable Launch Vehicle (EELV) program no longer being considered a major acquisition program and the cancellation and proposed cancellation of two development efforts which would have significantly increased DOD’s major space acquisition investment. Figures 2 and 3 reflect differences in total life-cycle costs and unit costs for satellites from the time the programs officially began to their most recent cost estimate. As figure 3 notes, in several cases, DOD has had to cut back on quantity and capability in the face of escalating costs. For example, two satellites and four instruments were deleted from National Polar-orbiting Operational Environmental Satellite System (NPOESS) and four sensors are expected to have fewer capabilities. This will reduce some planned capabilities for NPOESS as well as planned coverage. Figure 4 highlights the additional estimated months needed to complete programs. These additional months represent time not anticipated at the programs’ start dates. Generally, the further schedules slip, the more DOD is at risk of not sustaining current capabilities. For this reason, DOD began a follow-on system effort, known as the Third Generation Infrared Satellite to run in parallel with the Space Based Infrared System (SBIRS) program. This fiscal year, DOD launched the second Wideband Global SATCOM (WGS) satellite. WGS had previously been experiencing technical and other problems, including improperly installed fasteners and data transmission errors. When DOD finally resolved these issues, it significantly advanced capability available to warfighters. Additionally, the EELV program had its 23rd consecutive successful operational launch earlier this month. However, other major space programs have had setbacks. For example: In September 2008, the Air Force reported a Nunn-McCurdy unit cost breach of the critical cost growth threshold for the Advanced Extremely High Frequency (AEHF) satellite because of cost growth brought on by technical issues, schedule delays, and increased costs for the procurement of a fourth AEHF satellite. The launch of the first satellite has slipped further by almost 2 years from November 2008 to as late as September 2010. Further, the program office estimates that the fourth AEHF satellite could cost more than twice the third satellite because some components that are no longer manufactured will have to be replaced and production will have to be restarted after a 4-year gap. Because of these delays, initial operational capability has slipped 3 years—from 2010 to 2013. The Mobile User Objective System (MUOS) communications satellite estimates an 11-month delay—from March 2010 to February 2011—in the delivery of on-orbit capability from the first satellite. Further, contractor costs for the space segment have increased about 48 percent because of the additional labor required to address issues related to satellite design complexity, satellite weight, and satellite component test anomalies and associated rework. 
Despite the contractor cost increases, the program has been able to remain within its baseline program cost estimate. The Global Positioning System (GPS) IIF satellite is now expected to be delayed almost 3 years from its original date to November 2009. Also, the cost of GPS IIF is now expected to be about $1.6 billion—about $870 million over the original cost estimate of $729 million. (This approximately 119 percent cost increase is not that noticeable in figures 2 and 3 because the GPS II modernization program includes the development and procurement of 33 satellites, only 12 of which are IIF satellites.) The Air Force has had difficulty in the past building GPS satellites within cost and schedule goals because of significant technical problems, which still threaten its delivery schedule and because of challenges it faced with a different contractor for the IIF program, which did not possess the same expertise as the previous GPS contractor. Further, while the Air Force is structuring the new GPS IIIA program to prevent mistakes made on the IIF program, the Air Force is aiming to deploy the GPS IIIA satellites 3 years faster than the IIF satellites. We believe the IIIA schedule is optimistic given the program’s late start, past trends in space acquisitions, and challenges facing the new contractor. Total program cost for the SBIRS program is estimated around $12.2 billion, an increase of $7.5 billion over the original program cost, which included 5 geosynchronous earth orbit (GEO) satellites. The first GEO satellite has been delayed roughly 7 years in part because of poor oversight, technical complexities, and rework. Although the program office set December 2009 as the new launch goal for the satellite, a recent assessment by the Defense Contract Management Agency anticipates an August 2010 launch date, adding an additional 8 months to the previous launch estimate. Subsequent GEO satellites have also slipped as a result of the flight software design issues. The NPOESS program has experienced problems with replenishing its aging constellation of satellites and was restructured in July 2007 in response to a Nunn-McCurdy unit cost breach of the critical cost growth threshold. The program was originally estimated to cost about $6.5 billion for six satellites from 1995 through 2018. The restructured program called for reducing the number of satellites from six to four and included an overall increase in program costs, delays in satellite launches, and deletions or replacements of satellite sensors. Although the number of satellites has been reduced, total costs have increased by almost 108 percent since program start. Specifically, the current estimated life cycle cost of the restructured program is now about $13.5 billion for four satellites through 2026. This amount is higher than what is reflected in figure 2 as it represents the most recent GAO estimate as opposed to the DOD estimates used in the figure. We reported last year that poor workmanship and testing delays caused an 8-month slip in the delivery of a complex imaging sensor. This late delivery caused a delay in the expected launch date of a demonstration satellite, moving it from late September 2009 to early January 2011. This year it is also becoming more apparent that space acquisition problems are leading to potential gaps in the delivery of critical capabilities. 
For example, DOD faces a potential gap in protected military communications caused by delays in the AEHF program and the proposed cancellation of the TSAT program, which itself posed risks in schedule delays because of TSAT’s complexity and funding cuts designed to ensure technology objectives were achievable. DOD faces a potential gap in ultra high frequency (UHF) communications capability caused by the unexpected failures of two satellites already in orbit and the delays resulting from the MUOS program. DOD also faces potential gaps or decreases in positioning, navigation and timing capabilities because of late delivery of the GPS IIF satellites and the late start of the GPS IIIA program. There are also concerns about potential gaps in missile warning and weather monitoring capabilities because of delays in SBIRS and NPOESS. Addressing gaps in any one of these areas is not a simple matter. While there may be opportunities to build less complex “gap filler” satellites, for example, these still require time and money that may not be readily available because of commitments to the longer-term programs. There may also be opportunities to continue production of “older” generation satellites, but such efforts also require time and money that may not be readily available and may face other challenges such as restarting production lines and addressing issues related to obsolete parts and materials. Further, satellites on orbit can be made to last longer by turning power off at certain points in time, but this may also present unacceptable tradeoffs in capability. Our past work has identified a number of causes behind the cost growth and related problems, but several consistently stand out. First, on a broad scale, DOD starts more weapon programs than it can afford, creating a competition for funding that encourages low cost estimating, optimistic scheduling, overpromising, suppressing bad news, and, for space programs, forsaking the opportunity to identify and assess potentially more executable alternatives. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD is forced to continually shift funds to and from programs—particularly as programs experience problems that require additional time and money to address. Such shifts, in turn, have had costly, reverberating effects. Second, DOD has tended to start its space programs too early, that is, before it has the assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. This tendency is caused largely by the funding process, since acquisition programs attract more dollars than efforts concentrating solely on proving technologies. Nevertheless, when DOD chooses to extend technology invention into acquisition, programs experience technical problems that require large amounts of time and money to fix. Moreover, when this approach is followed, cost estimators are not well positioned to develop accurate cost estimates because there are too many unknowns. Put more simply, there is no way to accurately estimate how long it would take to design, develop, and build a satellite system when critical technologies planned for that system are still in relatively early stages of discovery and invention. While our work has consistently found that maturing technologies before program start is a critical enabler of success, it is important to keep in mind that this is not the only solution. 
Both the TSAT and the Space Radar development efforts, for example, were seeking to mature critical technologies before program start, but they faced other risks related to the systems’ complexity, affordability, and other development challenges. Ultimately, Space Radar was canceled, and DOD has proposed the cancellation of TSAT. Last year, we cited the MUOS program’s attempts to mature critical technologies before program start as a best practice, but the program has since encountered technical problems related to design issues and test anomalies. Third, programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenge or the maturity of the technologies necessary to achieve the full capability. DOD has preferred to make fewer but heavier, larger, and more complex satellites that perform a multitude of missions rather than larger constellations of smaller, less complex satellites that gradually increase in sophistication. This has stretched technology challenges beyond current capabilities in some cases and vastly increased the complexities related to software. Programs also seek to maximize capability because it is expensive to launch satellites. A launch using a medium- or intermediate-lift evolved expendable launch vehicle, for example, would cost roughly $65 million. Fourth, several of today’s high-risk space programs began in the late 1990s, when DOD structured contracts in a way that reduced government oversight and shifted key decision-making responsibility onto contractors. This approach—known as Total System Performance Responsibility, or TSPR—was intended to facilitate acquisition reform and enable DOD to streamline its acquisition process and leverage innovation and management expertise from the private sector. Specifically, TSPR gave a contractor total responsibility for the integration of an entire weapon system and for meeting DOD’s requirements. However, because this reform made the contractor responsible for day-to-day program management, DOD did not require formal deliverable documents—such as earned value management reports—to assess the status and performance of the contractor. The resulting erosion of DOD’s capability to lead and manage the space acquisition process magnified problems related to requirements creep and poor contractor performance. Further, the reduction in government oversight and involvement led to major reductions in various government capabilities, including cost-estimating and systems-engineering staff. The loss of cost-estimating and systems-engineering staff in turn led to a lack of technical data needed to develop sound cost estimates. We have not performed a comprehensive review of the space industrial base, but our prior work has identified a number of pressures associated with contractors that develop space systems for the government that have hampered the acquisition process. Many of these have been echoed in other studies conducted by DOD and congressionally chartered commissions. We and others have reported that industry—including both prime contractors and subcontractors—has been consolidated to a point where there may be only one company that can develop a needed capability or a specific component for a satellite system. In the view of DOD and industry officials we have interviewed, this condition has enabled contractors to hold some programs hostage and has made it difficult to inject competition into space programs.
We also have identified cases where space programs experienced unanticipated problems resulting from consolidations in the supplier base. For example, contractors took cost- cutting measures that reduced the quality of parts. In the case of GPS IIF, contractors lost key technical personnel as they consolidated development and manufacturing facilities, causing inefficiencies in the program. In addition, space contractors are facing workforce pressures similar to those experienced by the government, that is, there is not enough technical expertise to develop highly complex space systems. A number of studies have found that both industry and the U.S. government face substantial shortages of scientists and engineers and that recruitment of new personnel is difficult because the space industry is one of many sectors competing for the limited number of trained scientists and engineers. Security clearance requirements make competing for talented personnel even more difficult for military and intelligence space programs as opposed to civil space programs. In a 2006 review of space cost estimating, we also found that the government has made erroneous assumptions about the space industrial base when it started the programs that are experiencing the most challenges today. In a review for this subcommittee, for instance, we found that the original contracting concept for the EELV program was for the Air Force to piggyback on the anticipated launch demand of the commercial sector. Furthermore, the Air Force assumed that it would benefit financially from competition among commercial vendors. However, the commercial demand never materialized, and the government decided to bear the cost burden of maintaining the industrial base in order to maintain launch capability, and assumed savings from competition were never realized. Over the past decade, we have identified best practices that DOD space programs can benefit from. DOD has taken a number of actions to address the problems on which we have reported. These include initiatives at the department level that will affect its major weapons programs, as well as changes in course within specific Air Force programs. Although these actions are a step in the right direction, additional leadership and support are still needed to ensure that reforms that DOD has begun will take hold. Our work—which is largely based on best practices in the commercial sector—has recommended numerous actions that can be taken to address the problems we identified. Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions to move to next phases. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that space programs could benefit from. Table 1 highlights these practices. Several of these practices could also benefit the space industrial base. For instance, applying an evolutionary approach to development would likely provide a steadier pipeline of government orders and thus enable suppliers to maintain their expertise and production lines. More realistic cost estimating and full funding would reduce funding instability, which could reduce fits and starts that create planning difficulties for suppliers. 
Longer tenure and more authority for program managers would provide more continuity in relationships between the government and its suppliers. DOD is attempting to implement some of these practices for its major weapon programs. For example, as part of its strategy for enhancing the roles of program managers in major weapon system acquisitions, the department has established a policy that requires formal agreements among program managers, their acquisition executives, and the user community that set forth common program goals. These agreements are intended to be binding and to detail the progress a program is expected to make during the year and the resources the program will be provided to reach these goals. DOD is also requiring program managers to sign tenure agreements so that their tenure will correspond to the next major milestone review closest to 4 years. Over the past few years, DOD has also been testing portfolio management approaches in selected capability areas—command and control, net-centric operations, battlespace awareness, and logistics—to facilitate more strategic choices for resource allocation across programs. Within the space community, cost estimators from industry and agencies involved in space have been working together to improve the accuracy and quality of their estimates. In addition, on specific programs, actions have been taken to prevent mistakes made in the past. For example, on the GPS IIIA program, the Air Force is using an incremental development approach, where it will gradually meet the needs of its users; using military standards for satellite quality; conducting multiple design reviews; exercising more government oversight and interaction with the contractor and spending more time at the contractor’s site; and using an improved risk management process. On the SBIRS program, the Air Force acted to strengthen relationships between the government and the SBIRS contractor team, and to implement more effective software development practices as it sought to address problems related to the system’s flight software. Correspondingly, DOD’s Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics is asking space programs passing through milestone reviews to take specific measures to better hold contractors accountable through award and incentive fees, to require independent technology readiness assessments at particular points in the acquisition process, and to hold requirements stable. Furthermore, the Air Force, U.S. Strategic Command, and other key organizations have made progress in implementing the Operationally Responsive Space (ORS) initiative. This initiative encompasses several separate endeavors with the goal of providing short-term tactical capabilities as well as identifying and implementing long-term technology and design solutions to reduce the cost and time of developing and delivering simpler satellites in greater numbers. ORS provides DOD with an opportunity to work outside the typical acquisition channels to more quickly and less expensively deliver these capabilities. In 2008, we found that DOD had made progress in putting a program management structure in place for ORS as well as executing ORS-related research and development efforts, which include development of low-cost small satellites, common design techniques, and common interfaces. Legislation introduced in recent years has also focused on improving space and weapon acquisitions.
In February, the Senate Committee on Armed Services introduced an acquisition reform bill that contains provisions that could significantly improve DOD’s management of space programs. For instance, the bill focuses on increasing emphasis on systems engineering and developmental testing, instituting earlier preliminary design reviews, and strengthening independent cost estimates and technology readiness assessments. Taken together, these measures could instill more discipline in the front end of the acquisition process, when it is critical for programs to gain knowledge. The bill also requires greater involvement by the combatant commands in determining requirements and greater consultation among the requirements, budget, and acquisition processes. In addition, several of the bill’s sections, as currently drafted, would require in law what DOD policy already calls for but is not being implemented consistently in weapon programs. Last week, the House Committee on Armed Services announced it would be introducing a bill to similarly reform DOD’s system for acquiring weapons by providing for, among other things, oversight early in product development and for appointment of independent officials to review acquisition programs. However, we did not have time to assess the bill for this statement. The actions that the Air Force and Office of the Secretary of Defense have been taking to address acquisition problems are good steps. But more significant changes to processes, policies, and support are still needed to ensure that reforms can take hold. In particular, several studies have recently concluded that there is a need to strengthen leadership for military and intelligence space efforts. The Allard Commission reported that responsibilities for military space and intelligence programs are scattered across the staffs of DOD and the Intelligence Community and that it appears that “no one is in charge” of national security space. The House Permanent Select Committee on Intelligence (HPSCI) expressed similar concerns in its report, focusing specifically on difficulties in bringing together decisions that would involve both the Director of National Intelligence and the Secretary of Defense. Prior studies, including those conducted by the Defense Science Board and the Commission to Assess United States National Security Space Management and Organization (Space Commission), have identified similar problems, both for space as a whole and for specific programs. While these studies have made recommendations for strengthening leadership for space acquisitions, no major changes to the leadership structure have been made in recent years. In fact, an “executive agent” position within the Air Force that was designated in 2001 in response to a Space Commission recommendation to provide leadership has not been filled since the last executive resigned in 2007. In addition, more actions may be needed to address shortages of personnel in program offices for major space programs. We recently reported that personnel shortages at the EELV program office have occurred particularly in highly specialized areas, such as the avionics and launch vehicle groups. Program officials stated that 7 of 12 positions in the engineering branch for the Atlas group were vacant. These engineers work on issues such as reviewing components responsible for navigation and control of the rocket. Moreover, only half the government jobs in some key areas were projected to be filled.
These and other shortages in the EELV program office heightened concerns about DOD’s ability to use a cost-reimbursement contract acquisition strategy for EELV since that strategy required greater government attention to the contractor’s technical, cost, and schedule performance information. In previous reviews, we cited personnel shortages at program offices for TSAT as well as for cost estimators across space programs. While increased reliance on contractor employees has helped to address workforce shortages, it could ultimately create gaps in areas of expertise that could limit the government’s ability to conduct oversight. Further, while actions are being undertaken to make more realistic cost estimates, programs are still producing optimistic schedule estimates and promising that they will not miss their schedule goals. The GPS IIIA program, for example, began 9 months later than originally anticipated because of funding delays, but the delivery date remained the same. The schedule is 3 years shorter than the one achieved so far on GPS IIF. We recognize that the GPS IIIA program has built a more solid foundation for success than the IIF, which offers the best course to deliver on time, but setting an ambitious schedule goal should not be the Air Force’s only measure for mitigating potential capability gaps. Last year, we also reported that the SBIRS program’s revised schedule estimates for addressing software problems appeared too optimistic. For example, software experts, independent reviewers, and the government officials we interviewed all agreed that the schedule was aggressive, and the Defense Contract Management Agency has repeatedly highlighted the schedule as high risk. In conclusion, senior leaders managing DOD’s space portfolio are working in a challenging environment. There are pressures to deliver new, transformational capabilities, but problematic older satellite programs continue to cost more than expected, constrain investment dollars, pose risks of capability gaps, and thus require more time and attention from senior leaders than well-performing efforts. Moreover, military space is at a critical juncture. While there are concerns about the United States losing its competitive edge in the development of space technology, there are critical capabilities that are at risk of falling below their current levels of service. To best mitigate these circumstances and put future programs on a better path, DOD needs to focus foremost on sustaining current capabilities and preparing for potential gaps. In addition, there is still a looming question of how military and intelligence space activities should be organized and led. From an acquisition perspective, what is important is that the right decisions are made on individual programs, the right capability is in place to manage them, and there is someone to hold accountable when programs go off track. Madam Chairman, this concludes my prepared statement. I would be happy to answer any questions you or members of the subcommittee may have at this time. For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Art Gallegos, Assistant Director; Greg Campbell; Maria Durant; Arturo Holguin; Laura Holliday; Rich Horiuchi; Sylvia Schatz; and Peter Zwanzig.
In preparing this testimony, we relied on our body of work in space programs, including previously issued GAO reports on assessments of individual space programs, common problems affecting space system acquisitions, and the Department of Defense’s (DOD) acquisition policies. We relied on our best practices studies, which comment on the persistent problems affecting space acquisitions, the actions DOD has been taking to address these problems, and what remains to be done. We also relied on work performed in support of our 2009 annual weapons system assessment. The individual reviews were conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Despite a growing investment in space, the majority of large-scale acquisition programs in the Department of Defense's (DOD) space portfolio have experienced problems during the past two decades that have driven up costs and schedules and increased technical risks. The costs resulting from acquisition problems, along with the ambitious nature of space programs, have resulted in cancellations of programs that were expected to require investments of tens of billions of dollars. Along with the cost increases, many programs are experiencing significant schedule delays--as much as 7 years--resulting in potential capability gaps in areas such as positioning, navigation, and timing; missile warning; and weather monitoring. This testimony focuses on (1) the condition of space acquisitions, (2) causal factors, (3) observations on the space industrial base, and (4) recommendations for better positioning programs and industry for success. In preparing this testimony, GAO relied on its body of work in space and other programs, including previously issued GAO reports on assessments of individual space programs, common problems affecting space system acquisitions, and DOD's acquisition policies. Estimated costs for major space acquisition programs have increased by about $10.9 billion from initial estimates for fiscal years 2008 through 2013. As seen in the figure below, in several cases, DOD has had to cut back on quantity and capability in the face of escalating costs. Several causes behind the cost growth and related problems consistently stand out. First, DOD starts more weapon programs than it can afford, creating a competition for funding that encourages, among other things, low cost estimating and optimistic scheduling. Second, DOD has tended to start its space programs before it has the assurance that the capabilities it is pursuing can be achieved within available resources. GAO and others have identified a number of pressures associated with the contractors that develop space systems for the government that have hampered the acquisition process, including ambitious requirements, the impact of industry consolidation, and shortages of technical expertise in the workforce. Although DOD has taken a number of actions to address the problems on which GAO has reported, additional leadership and support are still needed to ensure that reforms that DOD has begun will take hold.
Treasury’s Office of Homeownership Preservation within the Office of Financial Stability (OFS), which administers the Troubled Asset Relief Program (TARP), addresses the issues of preventing avoidable foreclosures and preserving homeownership. Treasury established three initiatives funded under TARP to address these issues: the Making Home Affordable (MHA) program, the Hardest Hit Fund, and, in conjunction with the Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA), the FHA Refinance of Borrowers in Negative Equity Positions (FHA Short Refinance). Treasury allocated $29.9 billion in TARP funds to MHA to be used to encourage the modification of eligible mortgages that financial institutions owned and held in their portfolios (whole loans) or that they serviced for private-label securitization trusts, as well as to provide other relief to distressed borrowers. Only financial institutions that voluntarily signed a Commitment to Purchase Financial Instrument and Servicer Participation Agreement with respect to loans not owned or guaranteed by the government-sponsored enterprises Fannie Mae or Freddie Mac (the enterprises) on or before October 3, 2010, are eligible to receive TARP financial incentives under the MHA program. MHA was initially set to end December 31, 2012, but Treasury recently extended the MHA application deadline by 1 year to December 31, 2013. In addition to the original Home Affordable Modification Program (HAMP) first-lien modifications, MHA TARP-funded efforts include the Principal Reduction Alternative (PRA), the Second Lien Modification Program (2MP), the Home Affordable Unemployment Program, the Home Affordable Foreclosure Alternatives program, Home Price Decline Protection incentives, and several other incentive programs. The largest component of MHA is the HAMP first-lien modification program, which was intended to help eligible homeowners stay in their homes and avoid potential foreclosure. HAMP first-lien modifications are available to qualified borrowers who took out their loans on or before January 1, 2009. Only single-family properties (one to four units) with mortgages no greater than $729,750 for a one-unit property are eligible. HAMP uses a standardized net present value (NPV) model to compare expected cash flows from a modified loan to the same loan with no modification, using certain assumptions. If the NPV of the expected investor cash flow with a modification is greater than the NPV of the expected cash flow without a modification, the loan servicer is required to modify the loan. In addition, Treasury shares some of the costs of modifying mortgages with mortgage holders/investors and provides incentives of up to $1,600 to servicers for completing modifications. In early 2012, Treasury announced a second evaluation for a modification under HAMP, at which point the original HAMP first-lien modification structure was redesignated as HAMP Tier 1, and the new evaluation was named HAMP Tier 2. HAMP Tier 2 became available to borrowers June 1, 2012. Generally, HAMP Tier 1 is available to qualified borrowers who occupy their properties as their primary residences and whose first-lien mortgage payment is more than 31 percent of their monthly gross income, calculated using the front-end debt-to-income (DTI) ratio. In contrast, HAMP Tier 2 is available for either owner-occupied properties or rental properties, and borrowers’ monthly mortgage payments prior to modification do not have to exceed a specified threshold.
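To illustrate the NPV test just described, the following minimal Python sketch compares the two expected cash-flow streams; the discount rate, payment amounts, and function names are illustrative assumptions and not Treasury's actual NPV model, which relies on many additional inputs such as redefault probabilities and projected home prices.

def npv(monthly_cash_flows, annual_discount_rate=0.05):
    """Discount a stream of monthly cash flows back to the present."""
    r = annual_discount_rate / 12
    return sum(cf / (1 + r) ** m for m, cf in enumerate(monthly_cash_flows, start=1))

def servicer_must_modify(flows_with_mod, flows_without_mod):
    """True when the NPV of the expected investor cash flow with a modification
    exceeds the NPV of the expected cash flow without one."""
    return npv(flows_with_mod) > npv(flows_without_mod)

# Hypothetical comparison: smaller but sustained modified payments versus an
# unmodified loan assumed to stop performing after 18 months.
with_modification = [1_200] * 360
without_modification = [1_700] * 18 + [0] * 342
print(servicer_must_modify(with_modification, without_modification))  # True

In this hypothetical case the modified loan is worth more to the investor than the unmodified loan, so the servicer would be required to modify it.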
Mortgages secured by owner-occupied properties must be in imminent default or be two or more payments delinquent to be considered for HAMP Tier 1 or HAMP Tier 2. For mortgages secured by rental properties, only those that are two or more payments delinquent are eligible. The HAMP Tier 1 standard modification waterfall provides servicers with a sequential modification process to reduce mortgage payments to as close to 31 percent of gross monthly income as possible. Servicers must first capitalize accrued interest and certain expenses paid to third parties and add this amount to the loan balance (principal) amount. Next, the interest rate must be reduced in increments of one-eighth of 1 percent until the 31-percent DTI target is reached, but servicers are not required to reduce interest rates below 2 percent. If the interest rate reduction does not result in a DTI ratio of 31 percent, servicers must then extend the maturity and/or amortization period of the loan in 1-month increments up to 40 years. Finally, if the target DTI ratio is still not reached, the servicer must forbear, or defer, principal until the payment is reduced to the 31-percent target. Servicers may also forgive mortgage principal at any step of the process to achieve the target monthly payment ratio of 31 percent, provided that the investor allows principal reduction. In contrast, the HAMP Tier 2 modification provides servicers with a uniform set of actions that must result in a reduction in the principal and interest payments of at least 10 percent and a postmodification DTI that is greater than or equal to 25 percent but less than or equal to 42 percent in order for the modification to proceed. The NPV model applies the following steps, using information provided by the servicer to evaluate borrowers for HAMP Tier 2: accrued interest and certain expenses paid to third parties are capitalized (added to the principal amount); the interest rate is adjusted to the weekly Freddie Mac Primary Mortgage Market Survey Rate, rounded up to the nearest 0.125 percent, plus a risk adjustment established by Treasury (initially 50 basis points); the mortgage term is extended to 480 months and reamortized; and, if the premodification current loan-to-value (LTV) ratio is greater than 115 percent, principal forbearance is applied in the amount of the lesser of 30 percent of the unpaid principal balance (including capitalized amounts) or the amount required to create a postmodification LTV ratio of 115 percent. Borrowers must also demonstrate their ability to pay the modified amount by successfully completing a trial period of at least 3 months (or longer if necessary) before a loan is permanently modified and any government payments are made under both HAMP Tier 1 and HAMP Tier 2. According to Treasury data, about 880,000 trial modifications had been started under the TARP-supported (nonenterprise) portion of HAMP Tier 1 through April 2012. Of these, approximately 493,000 were converted to permanent modifications, 347,000 had been canceled, and 40,000 remained in active trial periods. Of the HAMP Tier 1 permanent modifications started, approximately 384,000 remained active, and 109,000 had been canceled. Treasury has entered into agreements to have Fannie Mae and Freddie Mac act as its financial agents for MHA. 
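The Tier 1 waterfall described above is, in effect, a small algorithm, and the simplified Python sketch below steps through it: capitalize arrears, cut the rate in one-eighth-percent increments down to a 2-percent floor, extend the term up to 40 years, and forbear principal until the payment reaches the 31-percent target. It is only a sketch under stated assumptions: the payment is treated as principal and interest alone (the actual front-end DTI also counts escrowed taxes and insurance), the NPV test and later rate step-ups are omitted, and the borrower figures are hypothetical.

def monthly_payment(principal, annual_rate, months):
    """Standard amortizing principal-and-interest payment."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months) if r else principal / months

def tier1_waterfall(principal, arrears, annual_rate, months, gross_income,
                    target_dti=0.31, rate_floor=0.02, max_months=480):
    # Step 1: capitalize accrued interest and third-party expenses.
    principal += arrears
    # Step 2: reduce the rate in one-eighth-percent increments, but not below 2 percent.
    while (monthly_payment(principal, annual_rate, months) / gross_income > target_dti
           and annual_rate - 0.00125 >= rate_floor):
        annual_rate -= 0.00125
    # Step 3: extend the term in 1-month increments, up to 40 years (480 months).
    while (monthly_payment(principal, annual_rate, months) / gross_income > target_dti
           and months < max_months):
        months += 1
    # Step 4: forbear (defer, at no interest) just enough principal to reach the target.
    target_payment = target_dti * gross_income
    payment = monthly_payment(principal, annual_rate, months)
    forbearance = 0.0
    if payment > target_payment:
        r = annual_rate / 12
        interest_bearing = target_payment * (1 - (1 + r) ** -months) / r
        forbearance = principal - interest_bearing
        payment = target_payment
    return {"rate": round(annual_rate, 5), "term_months": months,
            "forbearance": round(forbearance, 2), "payment": round(payment, 2)}

# Hypothetical borrower: $200,000 balance, $8,000 in arrears and fees,
# 6.5 percent rate, 300 months remaining, $4,200 gross monthly income.
print(tier1_waterfall(200_000, 8_000, 0.065, 300, 4_200))

In this hypothetical case the rate reduction alone reaches the 31-percent target, so no term extension or principal forbearance is needed.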
Fannie Mae serves as the MHA program administrator and is responsible for developing and administering program operations, including registering servicers and executing participation agreements with and collecting data from them, as well as providing ongoing servicer training and support. Within Freddie Mac, the MHA-Compliance team is the MHA compliance agent and is responsible for assessing servicers’ compliance with nonenterprise program guidelines, including conducting onsite and remote servicer loan file reviews and audits. In October 2010, PRA took effect as a component of HAMP to give servicers more flexibility in offering relief to borrowers whose homes were worth significantly less than their mortgage balance. Under PRA, Treasury provides mortgage holders/investors with incentive payments in the form of a percentage of each dollar of principal reduction. Treasury tripled the PRA incentive amounts offered to mortgage holders/investors for permanent modifications that had trial period effective dates on or after March 1, 2012. At their own discretion, servicers may also offer modifications under PRA to borrowers with LTV ratios that are less than 115 percent. However, PRA incentives are provided only for the portion of the principal reduction that brings the LTV no lower than 105 percent. No PRA incentives are provided for the portion of the principal reduction that reduces the LTV below 105 percent. Treasury had paid about $42 million in PRA incentives to participating mortgage holders/investors. According to Treasury, 2MP is designed to work in tandem with HAMP modifications to provide a comprehensive solution to help borrowers afford their total mortgage payments. A participating servicer of a second lien on a property with a first lien that receives a HAMP modification must offer to modify the borrower’s second lien, accept a lump sum payment from Treasury to fully extinguish it, or accept a lump sum payment from Treasury to partially extinguish it and modify the remaining portion. Under 2MP, servicers are required to take modification actions in the following order: capitalize accrued interest and other past due amounts; reduce the interest rate to as low as 1 percent for 5 years (when the interest rate will reset at the rate of the HAMP-modified first lien); extend the term to at least match the HAMP-modified first lien; and forbear or forgive principal in at least the same proportion as the forbearance or forgiveness on the HAMP-modified first lien, although servicers may choose to forbear or forgive more than that amount. According to Treasury, nearly 60,000 2MP modifications were active as of April 2012, in addition to more than 17,000 second liens that were fully extinguished. As it does with PRA, Treasury provides incentive payments to the second-lien mortgage holders in the form of a percentage of each dollar of principal reduction on the second lien. Treasury doubled the incentive payments offered to second-lien mortgage holders for 2MP permanent modifications that included principal reduction and had an effective date on or after June 1, 2012. Treasury established the Hardest Hit Fund program in February 2010, 1 year after announcing MHA. The goal of the program was to fund innovative measures developed by state HFAs and approved by Treasury to help borrowers in states hit hardest by the aftermath of the housing bubble. The Hardest Hit Fund program was originally announced as a $1.5 billion effort to reach borrowers in five states. 
Treasury subsequently provided three additional rounds of funding to bring the total allocation to $7.6 billion across 18 states and the District of Columbia. The 18 states are Alabama, Arizona, California, Florida, Georgia, Illinois, Indiana, Kentucky, Michigan, Mississippi, Nevada, New Jersey, North Carolina, Ohio, Oregon, Rhode Island, South Carolina, and Tennessee. In a recent report examining the implementation of the Hardest Hit Fund program, the Special Inspector General for the Troubled Asset Relief Program (SIGTARP) found that Treasury consistently applied its criteria in selecting states to participate but that the selections in the second round of funding were not transparent. Treasury designed the recently announced changes to its MHA programs to address barriers to participation it identified in the existing programs, but the changes may have a limited impact on increasing MHA participation rates. Because most of the changes became effective on June 1, 2012, we could not determine the extent to which they would in fact increase MHA participation rates. The servicers that we queried had mixed views on the likely effectiveness of these changes in increasing MHA participation. Also, Treasury reported that several servicers were not able to fully implement the HAMP Tier 2 changes by the effective date, including two large servicers that Treasury indicated would need several additional months to fully implement them. Additionally, we found that Treasury had not fully assessed or estimated the number of borrowers who would receive assistance as a result of these changes or the costs that would be incurred. Lastly, Treasury has not completed program-specific risk assessments to mitigate potential risks or developed performance measures to hold itself and servicers accountable for the MHA changes. Treasury officials told us that the recent changes to MHA—expanding HAMP eligibility, extending the program deadline for all MHA programs, and increasing incentives for PRA and 2MP—were designed to address several issues identified in Treasury’s analyses of the existing MHA programs. However, the likely effect of these changes on participation is not yet known and could be limited, according to servicers that we contacted. The numbers of newly started trial and permanent modifications have generally been in decline since early 2010 and in April 2012 reached their lowest levels since the HAMP first-lien program began (see fig. 1). One factor contributing to the initial decline was that as of June 1, 2010, Treasury required servicers to verify borrowers’ income before offering them a trial modification. In addition, according to Treasury officials, the pool of borrowers potentially eligible for HAMP has been shrinking, falling from an estimated 1.4 million in December 2010 to less than 900,000 12 months later. Treasury officials said that the changes in eligibility were made on the basis of an analysis of delinquent loans held by borrowers who had not been assisted by HAMP and might not receive assistance through non-MHA programs. Specifically, Treasury found that the 31-percent DTI threshold for HAMP Tier 1 was excluding a significant number of borrowers who could have experienced financial hardships. Other borrowers were being excluded because the modification steps required to bring their DTI down to 31 percent resulted in excessive forbearance or made the NPV result negative. These factors contributed to Treasury’s adopting the flexible postmodification DTI under HAMP Tier 2.
In addition, Treasury found that tenants were being displaced because the property owners could not obtain loan modifications for properties that were not the owners’ primary residence. The large number of non-owner-occupied properties with delinquent mortgages was another factor in Treasury’s decision to allow modifications on certain rental properties. Treasury officials told us that other borrowers could not be assisted under HAMP for a variety of reasons—for example, because their servicers did not participate in the HAMP program or their loans fell within the jurisdiction of FHA or Department of Veterans Affairs (VA) loan assistance programs. Treasury decided to keep the maximum loan limit and the origination date cutoff because these exclusions did not affect the target population of borrowers Treasury was trying to reach. Treasury officials said that their analysis suggested that increasing incentives for PRA and 2MP could also increase investor participation in these programs. The officials told us that they thought the rate of participation in PRA should be higher and that they wanted to encourage principal reduction for deeply underwater borrowers with a hardship because reducing principal would make for a more sustainable modification. Our analysis of Treasury’s HAMP data indicates that after PRA went into effect in October 2010, about 32 percent of nonenterprise trial modifications included principal reduction under the program as of April 2012. On a cumulative basis, the proportion of HAMP permanent modifications that include principal reduction under PRA has increased from less than 1 percent in May 2011 to nearly 6 percent in April 2012 (see fig. 2). Officials told us that PRA participation had also resulted in additional 2MP participation because servicers must make a corresponding principal reduction on any second-lien mortgage when the corresponding first-lien mortgage is reduced. Treasury officials also told us that they had found that increasing investor incentive levels would change a number of NPV evaluation results from negative to positive. Further, by increasing incentives officials hope to encourage greater participation among investors that already participate in PRA and those that do not but might be encouraged to participate. Treasury officials said that their discussions with servicers and investor groups indicated that the previous incentive levels were not high enough to entice all investors to participate in PRA. The expansion of HAMP eligibility to include HAMP Tier 2 also means that additional second-lien mortgages would be eligible for modification under 2MP. By increasing 2MP incentives, officials stated that Treasury intended to encourage continued participation going forward for these loans and to give servicers an incentive to increase write-downs, including full extinguishments on second-lien mortgages. Continued fragility in the housing market prompted Treasury to extend the MHA program application deadline another year. While there has been some improvement in the housing market, house prices remain near postbubble lows. In addition, default levels, which are associated with high unemployment and underemployment, have declined from their peak levels but remain high by historical standards. Further, Treasury projected that total spending for existing HAMP Tier 1 modifications and other MHA interventions would be approximately $9 billion of the $29.9 billion allocated by the time the program ended in December 2017. 
Treasury officials noted that this amount would increase as additional modifications were completed. Treasury has not identified the number of modifications that may be made under HAMP Tier 2 or the potential costs of the changes to MHA. According to Treasury officials, a number of external factors that could have an impact on these calculations remain uncertain, including the implementation of the national mortgage settlement involving the federal government, state attorneys general, and the five largest servicers; the participation of Fannie Mae and Freddie Mac in some of the recent MHA program changes; and the ability of the participating servicers to implement HAMP Tier 2 changes. Before the final program guidance was issued, Treasury’s preliminary estimate was that the changes could result in an additional 1 million borrowers potentially becoming eligible for MHA programs. Treasury has not provided a revised estimate that reflects the final changes, although Treasury officials stated that it would be lower due to the narrowing of the DTI range from what had initially been considered, among other factors. When we asked five servicers how they thought the changes might affect their loan modification volumes, their responses varied. One servicer anticipated a 15- to 18-percent increase in HAMP modifications because of the expanded DTI range, and another servicer stated that 50 percent of the borrowers it had been unable to help under HAMP Tier 1 had not met the 31-percent DTI restriction, so the changes could potentially increase its HAMP modifications. However, some servicers also indicated that HAMP Tier 2 might not reach many additional borrowers because the HAMP modifications would likely offset proprietary modifications that would have otherwise been made to those borrowers’ loans. Of the two servicers that expected the number of their modifications on rental properties to increase, one servicer stated that it had a large population of delinquent loans on rental properties but did not know how many would meet the other eligibility requirements for a HAMP modification. The other servicer expected the changes would increase its HAMP modification volume but had not projected the magnitude. Another servicer said that it did not have enough information to project the number of loans it might make under HAMP Tier 2. One servicer stated that increased PRA incentives should increase HAMP participation, and several also mentioned that the national mortgage settlement would have an impact, because part of the settlement required servicers to provide principal reduction. However, two of the servicers we contacted did not anticipate any increase in their HAMP participation levels from the increased incentives. One servicer indicated that its portfolio loans would not be affected by these new investor incentives but that more of the loans it serviced for other mortgage holders/investors might be modified. Specifically, about 15 percent of its mortgage holders/investors had opted out of PRA but had told this servicer that they might be willing to reconsider in response to the increased incentives, especially for loans that would qualify for the highest incentive on the principal reduction (LTVs greater than or equal to 105 percent but less than 115 percent).
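As a worked illustration of the PRA incentive rule noted earlier (incentives apply only to the portion of a write-down that brings the LTV no lower than 105 percent), the short sketch below computes the incentive-eligible part of a proposed principal reduction; the loan figures and function name are hypothetical.

def pra_incentive_eligible_reduction(unpaid_balance, home_value, proposed_reduction):
    """Return the portion of a proposed principal reduction that can earn PRA
    investor incentives: the part that brings the loan-to-value (LTV) ratio
    down to, but not below, 105 percent of the home's value."""
    balance_at_105_ltv = 1.05 * home_value
    return min(proposed_reduction, max(0.0, unpaid_balance - balance_at_105_ltv))

# Hypothetical loan: $260,000 owed on a home worth $200,000 (130 percent LTV).
# A $60,000 write-down would take the LTV to 100 percent, but only the first
# $50,000 (down to $210,000, or 105 percent LTV) is incentive-eligible.
print(pra_incentive_eligible_reduction(260_000, 200_000, 60_000))  # 50000.0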
Given the currently low participation rates and the reasons for them, as well as the mixed expectations of the servicers we interviewed, it is not yet possible to determine whether the changes will significantly increase the number of troubled borrowers assisted under MHA. Nevertheless, Treasury’s steps may further support the still-fragile housing market and help reduce the number of potential foreclosures. Treasury has taken several steps to help servicers meet the program requirements for the recent changes to MHA programs, but challenges could affect some servicers’ capacity to effectively implement the new program changes beyond the June 1, 2012, effective date. Treasury officials stated that they had modeled the HAMP Tier 2 program after the enterprises’ existing standard modification, believing that servicers would be better able to implement a new modification that was similar to a type of modification they already offered. Several servicers told us that Treasury had provided an early draft of the proposed HAMP Tier 2 changes for their review, and Treasury officials told us that they had consulted with servicers to establish effective dates for some changes. In addition, the officials told us that as part of the process of implementing HAMP Tier 2, Treasury’s program administrator, Fannie Mae, had relied on existing servicer integration teams to obtain implementation plans from the largest servicers, facilitate responses to servicers’ policy questions, and conduct onsite meetings with the largest servicers to address operational and reporting questions. Treasury officials also stated that they responded to servicers’ questions on a weekly basis and had met with several of the largest servicers to discuss their implementation plans. In spite of Treasury’s efforts to help ensure that servicers had the capacity to implement the recent changes and to facilitate implementation of the changes, some servicers did not have the necessary resources or infrastructure to effectively implement all the new program requirements at the announced start date of the program. While the similarities between the HAMP Tier 2 changes and the enterprises’ standard modification should ease implementation in areas such as staff training, some large servicers told us that there were some significant differences between HAMP Tier 2 and the enterprises’ standard modification programs, such as certain eligibility requirements and the use of an NPV model. Several servicers we spoke with thought that they might not be able to meet the effective date for the changes, and subsequently Treasury reported that ten servicers were unable to fully implement the changes by the effective date, including two large servicers that were not expected to fully implement them for several more months (mid-October 2012 for one large servicer). However, 17 of the 18 largest servicers were able to implement some aspects of HAMP Tier 2 as of the effective date, and 14 of the 18 had fully implemented HAMP Tier 2 by June 30, according to Treasury. To help ensure that the delays would not impact borrowers, Treasury imposed additional requirements on all servicers that did not fully implement HAMP Tier 2 by the June 1 effective date. These servicers must develop a process to identify borrowers who are potentially eligible for HAMP Tier 2; halt foreclosure referrals and foreclosure sales for those borrowers; and ensure that each borrower has a single point of contact.
Additionally, servicers that are unable to fully implement HAMP Tier 2 by mid-July will be required to evaluate and offer eligible borrowers proprietary modifications similar to HAMP Tier 2 and either automatically convert those borrowers to or reevaluate them for HAMP Tier 2 modifications when the changes are fully implemented. Treasury will conduct compliance reviews to help ensure that all servicers appropriately implement HAMP Tier 2 and adhere to the applicable interim requirements. Previously, Treasury officials had acknowledged that servicers might face some challenges, as they did when they implemented the enterprises’ standard modification. For example, according to the officials the larger servicers do not process proprietary loan modifications and modifications for the enterprises in the same geographic location. Servicers may also use different servicing platforms at each location, so that processing and personnel can be completely separate. Other federal housing officials also noted that the enterprises’ standard modification was more streamlined than the HAMP Tier 2 modification, in that it did not require an NPV test and allowed a broader DTI range. Treasury officials also acknowledged several other major operational issues that could affect implementation of the HAMP Tier 2 changes. For example, the five largest servicers need to implement operational changes in response to the recent mortgage settlement with the federal government and state attorneys general. Fourteen servicers must comply with consent orders issued by federal banking regulators in April 2011, and others have been involved in mergers or acquisitions. Treasury officials told us that they had identified certain risks associated with the recent changes based on internal analyses and discussions with stakeholders, but Treasury has not conducted a comprehensive risk assessment. Treasury officials said that they had incorporated ways to mitigate risks as part of their deliberations when designing the program changes and provided us with a summary document showing examples of actions they had taken to mitigate certain risks and challenges. For example, Treasury officials stated that they had lowered the allowable DTI ceiling for HAMP Tier 2 modifications to 42 percent (below the allowable DTI ceiling of 55 percent for the enterprises’ standard modification) to mitigate redefault risks after discussing the proposed changes with servicers, investors, and federal banking regulators. In addition, Treasury raised the allowable DTI floor to 25 percent (above the allowable DTI floor of 10 percent for the enterprises’ standard modification) to help ensure that borrowers who received HAMP Tier 2 modifications were really in need of assistance. Further, Treasury noted that it had taken several steps to mitigate the risk that servicers would not be able to implement HAMP Tier 2 in a timely or effective manner due to lack of capacity—efforts that we discussed earlier in the report. However, based on our review of available documentation and discussions with Treasury officials, Treasury did not appear to have performed key components of a risk assessment that are outlined in standards for internal control in the federal government prior to implementing HAMP Tier 2. Although Treasury took the first step of identifying risks, it did not analyze the significance and likelihood of occurrence of the identified risks. 
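The DTI bands discussed above reduce to a simple check, sketched below for a hypothetical borrower; the figures are illustrative, and an actual evaluation would also apply the full Tier 2 waterfall and the NPV test.

def meets_tier2_band(old_payment, new_payment, gross_monthly_income,
                     dti_floor=0.25, dti_ceiling=0.42):
    """HAMP Tier 2: the modification must cut the principal and interest
    payment by at least 10 percent and leave the postmodification front-end
    DTI between 25 and 42 percent (the enterprises' standard modification
    allows a wider 10- to 55-percent band)."""
    reduction_ok = new_payment <= 0.90 * old_payment
    dti = new_payment / gross_monthly_income
    return reduction_ok and dti_floor <= dti <= dti_ceiling

# Hypothetical borrower: $1,900 current payment, $1,500 proposed payment, and
# $4,000 gross monthly income -- a 21 percent cut and a 37.5 percent DTI.
print(meets_tier2_band(1_900, 1_500, 4_000))  # True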
As we previously reported, agencies must identify the risks that could impede the success of new programs and determine appropriate methods of mitigating these risks. In particular, we highlighted the need for Treasury to develop appropriate controls to mitigate risks before the programs’ implementation dates. Our internal control guidance further states that risks should be extensively analyzed whenever agencies begin the production or provision of new outputs or services and that agencies should give special attention to risks that can have more dramatic and pervasive effects. Officials told us that they had nearly completed a systematic risk assessment of the existing MHA programs and that they planned to conduct a formal risk assessment of HAMP Tier 2 once it was up and running and the servicers had been given the time to put their internal controls in place. In the meantime, several potential risks identified in the course of our review remain. Allowing borrowers to receive loan modifications that result in front-end DTIs of up to 42 percent under HAMP Tier 2, rather than the 31-percent target required under HAMP Tier 1, could increase redefault risk. Borrowers with high front-end DTIs may also have higher back-end DTIs (which include mortgage debt from subordinate liens in addition to the first-lien mortgage debt used to calculate the front-end DTI) that could affect their ability to make the modified mortgage payments. Although the back-end DTIs are not restricted under either the HAMP Tier 1 or HAMP Tier 2 program, they may be higher under HAMP Tier 2, potentially posing a greater risk. Permitting borrowers to obtain modifications for rental properties without sufficient controls and enforcement mechanisms could increase both default risk and the risk that the program will be misused for ineligible properties—for example, investment properties that are never rented. In order to receive a modification under HAMP Tier 2 for a rental property, borrowers must self-certify under penalty of perjury that they intend to rent the property if it is or becomes vacant and that they do not own more than five single-family properties (in addition to their principal residence). However, these borrowers may encounter significant delays renting one or more properties for a variety of reasons, such as adverse housing market conditions and poor property condition, or the properties may eventually rent for less than expected. In either case, the borrower’s ability to remain current in either the trial modification or, more importantly, the permanent modification, could be compromised, risking redefault. Further, self-certifications do little to help ensure that borrowers are in compliance with program requirements unless extensive controls are in place to ensure that borrowers are telling the truth. SIGTARP’s April 2012 Quarterly Report to Congress made several recommendations related to the need for Treasury to protect against the possible misuse of HAMP Tier 2 funds to modify loans on vacation homes or investment properties that were never rented. Further, some servicers expressed concern that extending the deadline to December 31, 2013, and opening up HAMP Tier 2 to mortgages on rental properties might jeopardize the safe harbor protection provided under the Helping Families Save Their Homes Act of 2009. 
The act provides a safe harbor for servicers that modify mortgages and engage in other loss mitigation activities consistent with guidelines issued by Treasury and that satisfy specific requirements, including implementing a loss mitigation plan prior to December 31, 2012. Although Treasury officials stated that the significance of this issue was unclear, two servicers we spoke to noted that it could affect the reach of the program. Treasury officials noted that servicers would face potential liability only if mortgage holders or investors were to take legal action against them. As we reported previously, Treasury must establish specific and relevant performance measures that will enable it to evaluate a program’s success against stated goals in order to hold itself and servicers accountable for these TARP-funded programs. We recommended that Treasury finalize and implement benchmarks for performance measures under the first-lien modification program, as well as develop measures and benchmarks for other TARP-funded homeowner assistance programs. As discussed in appendix II, Treasury has estimated the expected funding levels for the MHA component programs (except for HAMP Tier 2) and established performance measures to assess servicer compliance and implementation of MHA programs. But it has not fully developed specific and quantifiable outcome measures or benchmarks to determine the success of these programs, including goals for the number of homeowners these programs are expected to help. Similarly, Treasury has not identified outcome measures that will be used to evaluate the overall success of HAMP Tier 2 in achieving the goals of preventing foreclosures and preserving homeownership. The measures of servicer performance used in the quarterly servicer assessments are valuable indicators for monitoring how MHA programs are being implemented, but they do not provide a way to assess the extent to which each program is achieving the objectives spelled out in EESA. Treasury officials said that they would assess redefault rates for different MHA programs. Treasury officials believe that HAMP redefault rates compare favorably with the rates of other types of modifications, but Treasury has not yet established redefault rate benchmarks or goals. Also, Treasury has noted that it may not be possible to gauge the unique contribution of any one program among the array of activities aimed at supporting housing markets and homeowners. Treasury officials told us that they wanted to avoid creating unrealistic expectations when setting goals for participation, given that external factors that affect participation are difficult to predict. Instead, Treasury officials said that they were focusing their efforts on working closely with servicers to encourage them to reach out to homeowners and on encouraging homeowners to get help. Treasury has established performance measures to assess servicers’ compliance with MHA program requirements and their performance that are published in quarterly servicer assessments. The compliance measures include quantitative measures with explicit benchmarks, such as the percentage of servicers’ eligibility determinations and borrower income calculations that are accurate. 
However, the servicer performance measures, which include the servicer’s rate of converting trials to permanent modifications, the number of trials lasting 6 months or longer, response time to resolve inquiries that have been escalated to the HAMP Solution Center, and the percentage of missing modification status reports, do not have such benchmarks or goals. Instead, these measures look at relative performance by comparing a servicer’s current performance to either its past performance or to the best and worst performance among the 9 largest MHA servicers. After a slow start, states have increased their spending on borrower assistance under the Hardest Hit Fund in recent months, but it is not clear that all the states will meet their spending and borrower assistance goals. Nonetheless, most of the state officials we spoke to said that they anticipated being able to spend their full allocation. State officials told us that, with some help from Treasury, they had confronted challenges related to staffing and infrastructure, servicer participation, borrower outreach, and program implementation. In particular, they noted that Treasury’s efforts to facilitate communication among the states and with servicers through regular conference calls and two national summits had been key to addressing a variety of challenges through the sharing of best practices and solving problems together. These officials told us that Treasury continued to work with them to address some of the remaining barriers. In addition to assisting states in implementing their programs, Treasury oversees the states’ activities, including reviewing and approving all proposed changes to program eligibility requirements and funding allocations. In addition, Treasury’s Hardest Hit Fund program staff and compliance teams conduct oversight and monitoring of states’ Hardest Hit Fund activity monthly, quarterly, and annually. However, Treasury has not required states to report data on administrative expenses in a consistent format and does not report any data on these expenses publicly. Treasury also has not consolidated states’ performance and financial data, including administrative expenses, into a single public report. Treasury made the full Hardest Hit Fund allocations available to HFAs in 18 states and the District of Columbia in September 2010. However, of the $7.6 billion allocated, states had provided combined assistance of $359 million (5 percent) as of March 2012. More than two-thirds of the amount spent ($246 million) was disbursed during the fourth quarter of 2011 and first quarter of 2012, representing a substantial increase relative to previous quarters. The states also reported that they had provided assistance to 43,580 borrowers as of March 31, 2012, more than half of whom were approved during the most recent two quarters. The states varied widely in the proportion of the funds they had disbursed, from less than 1 percent of their total allocation to more than 20 percent (see fig. 3). The two states with the largest Hardest Hit Fund allocations—California and Florida—had spent about 3 percent of their allocated funds (less than $80 million out of more than $3 billion) as of March 31, 2012. Despite the recent increases in disbursements, Treasury estimated that most states would need to further increase the rate of spending in order to fully spend their allocation and reach their borrower assistance goals by the time the program terminated on December 31, 2017. 
Using the first quarter 2012 disbursement rates, Treasury’s analysis showed that 14 of the 19 HFAs would not meet their disbursement targets by the time the program ended. In addition, although the states had estimated that they would assist more than 450,000 borrowers by the end of the program, Treasury’s projections indicated that, using the monthly rate of borrowers approved during the first quarter of 2012, the states would assist fewer than 350,000. Nonetheless, officials in four of the states we spoke to said that they anticipated being able to spend their full allocation as they continued to ramp up their programs. Officials in the fifth state said they were actively exploring ways to increase participation in order to be able to spend their full allocation. As shown in figure 4, most of the funds allocated and spent as of March 31, 2012, have gone to helping unemployed homeowners make mortgage payments (66 percent of allocations and 76 percent of expenditures) or to reinstating delinquent mortgages (12 percent of allocations and 20 percent of expenditures). All 18 states and the District of Columbia have implemented programs to provide partial or full mortgage payments to borrowers who are unemployed. Some states, such as North Carolina and Indiana, have incorporated reinstatement components into their payment assistance programs. In addition, seven states have implemented separate reinstatement programs. However, the eligibility requirements for and terms of these programs vary across states. In some states, the borrower’s household income must be below a certain ceiling (for example, 120 percent of the area median income in California). Another state (New Jersey) has no maximum household income level, but the borrower’s monthly mortgage payment must be at least 31 percent of household income. Some states have expanded the eligibility requirements to reach more borrowers—for example, by adding a definition of underemployment and allowing underemployed borrowers to qualify for the program. Further, across states the length of time that borrowers can receive assistance can be as short as 9 months and as long as 36 months, while the maximum payment assistance an unemployed or underemployed borrower can receive ranges from $9,000 to $48,000. Several states we spoke with were considering or had already made changes to their program requirements in order to allow borrowers to receive more assistance than initially planned in an effort to disburse Hardest Hit Fund money more quickly. According to servicers we spoke with, these types of programs complement other foreclosure mitigation programs available to borrowers through federal and proprietary programs. States have also implemented other types of programs using Hardest Hit Fund funds, including principal reduction, second-lien reduction, and transition assistance. Through the first quarter of 2012, these programs represented 22 percent of funds allocated to borrower assistance but less than 5 percent of the states’ spending on such assistance. According to states and servicers we spoke with, these programs have been more difficult to implement widely because they generally require a greater level of involvement and decision making from servicers than other Hardest Hit Fund programs, such as payment assistance and loan reinstatement. In addition, the enterprises do not participate in Hardest Hit Fund principal reduction programs that require matching funds from investors or servicers. 
Because most of the states with principal reduction programs require matching funds, the pool of borrowers who are potentially eligible for these programs is limited. As of March 31, 2012, the states had spent $132 million on administrative costs for implementing the programs, representing more than a quarter of their total spending (see fig. 5). Treasury approves allocations for administrative expenses as part of the program agreements it makes with the states. As of March 2012, states had allocated about $864 million, or 11 percent of their funds, to administrative expenses. Two states (Nevada and New Jersey) spent more on administrative expenses than they did on borrower assistance (that is, administrative expenses were more than 50 percent of their total disbursements). Hardest Hit Fund officials in one state pointed out that their program faced large initial costs because they did not have the necessary infrastructure in place to implement it and therefore had to spend time and resources at the outset developing policies and procedures, leasing office space, and purchasing equipment. Officials from another state said that their high initial administrative costs were driven in part by up-front investments in technology they needed to make in order to implement the program. Treasury officials said that states had budgeted for initially high administrative expenses to cover start-up costs. State officials and Treasury staff told us that they expected administrative costs to fall after the programs were established. However, it is not yet clear whether states have spent all their budgeted start-up funds and transitioned to using ongoing administrative expenditures to cover program activities. For example, four states increased their cumulative administrative spending by more than 50 percent in the first quarter of 2012. In addition, several states have requested increases in their administrative budgets—for example, to hire additional staff to implement their programs. Although most states have spent less than 20 percent of the funds allocated for administrative expenses and are not at risk of running out of administrative funds, efficient use of these resources will be important in order for the states to achieve their goal of assisting borrowers. In addition, Treasury’s rigorous oversight of spending decisions throughout the life of the program will be critical to helping ensure that funds are spent as intended. The states were slow to start disbursing funds for borrower assistance, in part because of challenges they faced in getting their programs up and running. In many cases, the state HFAs did not have direct experience administering the types of programs they were putting in place and had to learn as they went. Over time, they have been able to overcome some of the challenges they faced, although others remain. In some cases, administering Hardest Hit Fund programs involved unexpected activities. For example, officials in Ohio said that they did not initially realize that they would need a call center or a closing unit to work with servicers to finalize agreements to provide borrower assistance. State officials had to identify the positions and skill sets that would be needed to administer their programs and decide whether to use existing HFA staff, hire new staff, or contract out certain functions. Florida officials stated that they were using both new and existing HFA staff to administer the Hardest Hit Fund programs, although not all of them were working on these programs full time.
Nevada and Ohio officials told us they had hired new staff to perform functions specific to Hardest Hit Fund activities, while California officials told us they had outsourced most of the operational work to a third-party service provider. This company provides staff for a call center and for processing, underwriting, and fulfillment on behalf of the HFA. All of the states we spoke with were using local housing counselors to help with borrower intake. States are also challenged to make sure they have the right number of personnel to administer the program. Officials in one state noted that it was a challenge to determine how to scale up staffing (as well as systems, processes, facility needs, and technology infrastructure) that had been put in place for the initial Hardest Hit Fund allocation to accommodate the unexpected increase after Treasury nearly doubled all the states’ allocations in the final round of funding. Treasury officials told us that they monitored state staffing and capacity to help ensure that states were able to administer Hardest Hit Fund programs effectively. State officials we spoke to also faced challenges related to getting the needed infrastructure—office space, equipment, and information technology—in place to implement the program quickly. One concern of the states was getting a software and technology system in place to facilitate the application process. Some states developed their own systems, while others sought to identify existing products that could be used. According to one state official we spoke with, Treasury facilitated the sharing of best practices among the states, leading this state to adopt a system that other states had tried and found to work for their Hardest Hit Fund programs, which were similar in structure. This system, Counselor Direct, has been adopted by 11 of the 19 states, according to Treasury. While there have been some problems with the system, state officials told us that they had found Counselor Direct to be responsive to their needs. In general, the states we spoke to said that servicer participation had been a significant issue initially but that most servicers were now participating in the mortgage payment assistance and reinstatement programs. SIGTARP recently reported that states had some initial difficulty getting servicers—particularly large servicers—to agree to participate in their programs. These large servicers cited the administrative burden of implementing more than 50 programs in 19 different states. Further, Fannie Mae and Freddie Mac did not initially issue specific guidance to servicers about participating in the Hardest Hit Fund programs. However, Treasury later took action to facilitate participation by holding a national summit in September 2010 with the states, servicers, and the enterprises that resulted in some standardization of programs and communication methods. After the summit, the enterprises issued guidance in October 2010 directing servicers to participate in Hardest Hit Fund programs providing mortgage payment assistance or reinstating delinquent loans, and subsequently large servicer participation greatly increased. The lack of servicer participation in other types of programs, such as principal reduction and second-lien reduction, remains a challenge for states that offer those programs. 
Nevada officials said that they were having more success working with servicers on a case-by-case basis to reduce or eliminate second liens than they had trying to require servicers to sign formal agreements committing them to broad participation in their second-lien program. As we noted earlier, the enterprises do not permit servicers to participate in principal reduction programs that require matching funds from the investor or the servicer, as most Hardest Hit Fund principal reduction programs do. Without these loans, the number of borrowers these programs can assist is limited. In addition, Treasury officials and servicers we spoke with pointed out that the principal reduction programs required greater involvement from servicers to evaluate borrowers, something that servicers may not see as worthwhile given the relatively small scale of the Hardest Hit Fund programs. Further, given the requirements under the national mortgage settlement with the federal government and state attorneys general, the large servicers are more likely to focus on putting programs in place to meet those obligations. According to one servicer, it is easier to develop one solution that will satisfy the principal reduction requirements under the settlement than to try to incorporate the various Hardest Hit Fund principal reduction efforts. However, two states—Illinois and Oregon—are piloting different types of principal reduction programs that bypass the servicers. These programs involve buying the loans from the investor and then modifying or refinancing them to reduce the principal. State officials credited the regular conference calls that Treasury facilitated with spreading information about these programs. Several states, including Ohio and Florida, are waiting to see the outcomes of these pilot programs in order to determine whether to pursue them. Several states mentioned ongoing implementation challenges, in particular in the area of exchanging information with servicers. One of the barriers to servicer participation at the outset of the programs was the lack of standardization across state programs. One of the solutions that came out of the September 2010 national summit was the development of a common data file that all states and servicers would use to exchange information about borrowers and the assistance being provided. After the summit, Treasury and several servicers and states jointly developed the common data file. Initially, Treasury hosted a weekly teleconference with the states and servicers that has since changed to a monthly schedule, and any servicer or state can participate. Treasury has also overseen the formation of a committee to discuss problems with and proposed changes to the common data file. However, state officials told us that some problems continued to come up related to the common data file and the exchange of information. For example, they told us that servicers had differing interpretations of how certain fields should be completed. One state said that it had over 200 servicers participating in its Hardest Hit Fund program and that each one had its own idea of how to complete the fields. Servicers we spoke with said that states did not always provide complete or accurate information in a timely manner—for example, instructions for applying a payment to a borrower’s account were not always clear or complete. 
Treasury officials said that although these issues came up from time to time, the reduced frequency of the calls reflected the decreasing number of issues raised related to the common data file. Treasury officials told us that the data dictionary Treasury helped to create clarified much of the confusion relating to interpreting data fields. According to Treasury officials, several states have developed their own training materials for using the common data file, including Ohio, which has posted a tutorial on its website. Reaching the targeted population of eligible borrowers is another challenge states continue to face. Although broad marketing efforts help to raise awareness of the programs states offer, they also result in a large number of ineligible borrowers seeking assistance. For example, California officials said they had received many inquiries from borrowers about the state’s principal reduction program. However, a substantial proportion of these borrowers were not eligible because their servicer was not participating or they did not have a financial hardship but were merely seeking a way to reduce their principal balance. In contrast, targeted solicitations of distressed borrowers may not result in a high response rate. Part of the problem in those cases, according to Nevada officials, is that borrowers have been repeatedly warned about scams and are therefore skeptical about the solicitation and unwilling to respond. In some cases, borrowers may have made the decision not to seek assistance and instead live rent-free until the foreclosure process runs its course, which in Nevada can take 2 or 3 years. Florida officials said that they relied on housing counselors to help steer borrowers to the most appropriate program for their circumstances, including Hardest Hit Fund programs. The officials said that they had developed marketing materials that they distributed at events and to housing counselors. These materials have different codes that can be used to track referrals. This technique helps to identify the marketing channels that are most effective at reaching eligible borrowers. Treasury has incorporated the Hardest Hit Fund into its existing marketing and outreach activities. Treasury officials told us that they had invited the state HFAs to Treasury events in Hardest Hit Fund states, allowing the HFAs to make presentations about their programs and network with servicers and counselors. At some events, the states may even take applications for assistance. Treasury’s website managers have also exchanged information with HFAs on methods to improve their sites. Treasury officials said that Hardest Hit Fund marketing must be done locally because the programs differ from state to state and that these differences had prevented Treasury from developing a national campaign. Officials in the District of Columbia said that they had been successful in partnering with the department that administers unemployment benefits to obtain a list of those receiving unemployment benefits. By comparing the addresses of individuals who appear on that list with a list of properties receiving delinquency or foreclosure notices, they have been able to effectively target their efforts to a relatively small population of borrowers who are potentially eligible. According to Treasury officials, other states have had similar success working across departments in their state governments. 
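The District’s approach amounts to cross-referencing two address lists: recipients of unemployment benefits and properties with delinquency or foreclosure notices. The minimal sketch below illustrates that matching idea only; the function names and sample addresses are hypothetical, and a real effort would need far more robust address standardization than this simple normalization.

```python
import re

def normalize(address):
    """Crude address normalization for matching: uppercase, strip punctuation,
    collapse whitespace. Real matching would also need to handle unit numbers,
    abbreviations (ST vs. STREET), and similar variations."""
    cleaned = re.sub(r"[^\w\s]", "", address.upper())
    return re.sub(r"\s+", " ", cleaned).strip()

def outreach_targets(benefit_addresses, notice_addresses):
    """Return addresses appearing on both lists -- the potentially eligible group."""
    benefits = {normalize(a) for a in benefit_addresses}
    notices = {normalize(a) for a in notice_addresses}
    return sorted(benefits & notices)

# Hypothetical data for illustration.
ui_list = ["123 Main St NW, Washington, DC", "45 Oak Ave SE, Washington, DC"]
notice_list = ["123 MAIN ST NW WASHINGTON DC", "900 Elm Rd NE, Washington, DC"]
print(outreach_targets(ui_list, notice_list))  # ['123 MAIN ST NW WASHINGTON DC']
```

The design choice is simply set intersection after normalization, which keeps the outreach list limited to households that appear on both lists and are therefore most likely to be eligible.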
Hardest Hit Fund officials in California told us they were able to partner with the state office administering unemployment benefits to mail out information on the Hardest Hit Fund unemployment program. As a result, nearly 10,000 homeowners were identified as eligible for the program. Finally, state officials told us that they tracked the reasons borrowers who were reached did not qualify for Hardest Hit Fund programs, an effort that helped them identify borrowers in need of assistance who were ineligible for it because they did not meet certain requirements. Officials in California and Ohio said that their states used these tracking efforts to evaluate the Hardest Hit Fund program requirements. As a result, they have been able to propose changes to their programs to better reach borrowers who need assistance. Treasury has established procedures to oversee the implementation and performance of states’ Hardest Hit Fund programs but has opportunities to improve both its monitoring and program transparency. Treasury officials approve state Hardest Hit Fund programs and review and approve all proposed changes to help ensure that the programs address the goals laid out in EESA. When states propose changes to their programs—for example, changing eligibility requirements, reallocating funds, or adding or subtracting programs—they must submit amendments to their agreements with Treasury for its approval. Treasury’s Hardest Hit Fund program staff review the changes and supporting rationale to ensure that the changes are consistent with the principles laid out in EESA. Although some state officials we spoke with did not have concerns about Treasury’s process for reviewing proposed amendments, they also told us that they were not aware of specific criteria beyond consistency with EESA that Treasury used to determine whether to approve the proposals or request changes. Treasury officials told us that they did not have prescriptive guidelines (other than EESA), because the intent of the program was to let states develop innovative solutions to the problems they faced. When the amendment involves an increase in the amount allocated to administrative expenses, state officials must explain how the additional funds will be spent. A committee of officials representing various parts of OFS reviews and approves the proposed amendment, which the state and Treasury often discuss in detail. The magnitude of the changes, as well as whether another state has proposed something similar, can affect how long it takes Treasury to review and approve them. Generally, state officials told us that Treasury had been very responsive to requests for program changes, often getting changes approved in a matter of weeks or even days. Treasury has established several layers of review and reporting to monitor the states’ Hardest Hit Fund activity: annual compliance reviews conducted by OFS compliance staff; required annual financial and internal controls audits performed by independent third parties; quarterly performance and financial reporting to Treasury, with the performance reports posted on the HFAs’ websites; and monthly progress reports submitted directly to Treasury. Annual compliance reviews. The compliance team from OFS spends one week on site at each HFA. These reviews examine the HFAs’ internal controls, eligibility determinations, program expenses, administrative expenses, and reporting. The first round of compliance reviews was scheduled to be completed by September 2012, with the second round to be completed in 2013.
Treasury has developed a database to track items identified in the first round of compliance reviews, and officials told us they were working to populate the database with information from the compliance review reports that had already been completed. Officials in one state who had recently completed an initial compliance review said that they found the process to be transparent and helpful. Treasury staff provided them with a list of documents they needed and a schedule of interviews with HFA staff. One other state told us that the compliance review and findings were very helpful and that it had taken steps to implement Treasury’s recommendations. Treasury stated that the compliance reviews discovered issues that were largely one-time problems—for example, control failures involving undocumented fee schedules or unrecorded approvals. States generally correct these types of issues on the spot, according to Treasury officials. Annual financial and internal controls audits. As outlined in the agreements with Treasury, states must submit annual audited financial statements. Treasury has directed the states to post these publicly on their websites. In addition, the states must certify that they have an effective internal control program and must have a third party independently verify the effectiveness of their internal control programs on an annual basis. According to Treasury officials, although states certified that their internal control programs were effective during the first year, many of them did not get the independent third-party verification. Treasury officials told us that they had been addressing this issue by emphasizing the need for states to have their internal control systems verified in the first round of compliance reviews. Quarterly performance and financial reports. Under the agreements they signed with Treasury, the states are required to submit quarterly performance and financial reports to Treasury and post the performance reports on their websites. These performance reports follow a standardized format specified by Treasury and detail borrower characteristics and program outcomes. Treasury’s Hardest Hit Fund program staff review states’ performance relative to the goals they have established and discuss any challenges the states are facing in reaching their goals. According to Treasury officials and state officials we spoke with, the performance measures that they focus on include denial rates and the percentage of completed applications that receive assistance. State officials also look at the percentage of applications started that are completed. As more borrowers transition out of the program, states will focus more on outcome measures, such as the percentage of borrowers that are able to retain their homes 6, 12, and 24 months after receiving assistance. The financial reports are submitted directly to Treasury, but there is no standardized format for them. Treasury officials said that states are required to submit responses to seven standard questions, including requests for the total administrative expenses for the quarter and cumulative administrative expenses, and must reconcile the financial reports to the quarterly performance reports. Monthly progress reports. The monthly reports outline the activities each state undertook that month and the amounts spent on borrower assistance and administrative expenses. 
According to Treasury officials, the monthly reports are less formalized than the quarterly performance reports and allow the states to provide qualitative information about their programs. Treasury’s Hardest Hit Fund staff discusses the contents of the progress reports with each state at least quarterly (monthly if there are any performance concerns). Even with these efforts, Treasury’s monitoring of administrative expenses incurred by the states is limited by the lack of consistency in states’ reporting. Treasury has built controls into the system that states use to draw down funds that prevent states from requesting draws for administrative expenses that exceed the approved amount. Similarly, Treasury has developed analytical tools to track administrative expenses and the rate of spending overall. Treasury officials told us that they had compared the rate of spending against state administrative expense budgets that detailed expected spending over time. However, Treasury has not standardized the format in which states are to provide administrative expense data, limiting Treasury’s ability to compare spending patterns across states and identify areas requiring greater oversight. In addition, Treasury does not require states to submit detailed reports of administrative expenses by category that would allow for a comparison of actual expenses and the administrative budgets the states submitted as part of their agreements with Treasury. According to Treasury, administrative expenses are not easily comparable across states because of differences in programs and their structures. However, having states report this information to Treasury in a consistent format could provide greater insight into states’ progress in implementing the Hardest Hit Fund and inform Treasury’s oversight and monitoring decisions. Standards for internal control state that operational and financial data are necessary for program managers to determine whether the programs are meeting goals and effectively and efficiently using resources. Further, effective internal control systems provide reasonable assurance to taxpayers that federal funds are used as intended and in accordance with applicable laws and regulations. Without detailed and consistent information on the types of administrative expenses states have incurred relative to their plans for the program, Treasury may be constrained in its ability to monitor (1) whether program funds are being used effectively to achieve program goals and (2) the relationships among program expenses, activities, outputs, and outcomes. Further, the transparency of the status of the Hardest Hit Fund and states’ performance could be enhanced. Although the quarterly performance reports that detail the number of borrowers assisted and the total amount of assistance the states provide are publicly available, Treasury does not require states to publicly disclose the administrative expenses they incur to implement the Hardest Hit Fund as part of the reporting. Treasury officials told us that they informed the states in a recent teleconference that this information would be required to be reported in the quarterly performance report for the third quarter of 2012. In addition, Treasury does not aggregate the quarterly performance and financial data it receives to provide policymakers and the public with a snapshot of the Hardest Hit Fund’s status. 
Treasury also has not made available to the public consolidated reports on the states’ relative performance when activities and performance measures are comparable across states—for example, under the payment assistance or reinstatement programs—although Treasury officials said that they provided consolidated reports to the states on a quarterly basis and to policymakers on request. As we have previously reported, transparency remains a critical element in the context of TARP and the unprecedented government assistance it has provided to the financial sector. Such transparency could help clarify for policymakers and the public the costs of Hardest Hit Fund assistance and increase understanding of Hardest Hit Fund results. Improving the clarity of communications about the costs and performance of Hardest Hit Fund would help to inform decisions about how best to target remaining funds to achieve program goals. HAMP, the Hardest Hit Fund, and the newer MHA programs were part of an unprecedented response to a particularly difficult time for our nation’s mortgage markets. But 3 years after Treasury first announced that it would use up to $50 billion in TARP funds for various programs intended to preserve homeownership and protect home values, the number of borrowers who received permanent HAMP first-lien modifications is far below Treasury’s original estimates of the number of people who would be helped by this program. The number of borrowers starting HAMP trial modifications has continued to decline. In an effort to boost participation, Treasury recently rolled out HAMP Tier 2 to extend and expand the program. However, Treasury has made no definitive projections of the number of borrowers who might be helped. The program has not been fully implemented, and servicers have mixed opinions on its possible effect. The recent changes are a positive step in the effort to reach borrowers who have previously been denied HAMP assistance, but the pool of eligible borrowers is diminishing over time. Further, Treasury has taken steps to assess and facilitate servicers’ readiness, but several of the large servicers did not have the system changes in place to process all aspects of HAMP Tier 2 modifications by June 1, 2012. As we have noted in past reports, swift action on the part of Treasury is imperative to help ensure that servicers have the ability to implement new initiatives. As demonstrated by the initially slow rollout of the HAMP first-lien modification program, the success of these TARP-funded initiatives will be largely driven by the capacity and willingness of servicers to implement them expeditiously and effectively. Servicers could be hampered by the myriad programs they currently must deal with, including the settlement reached with the state attorneys general. Treasury has established performance measures to assess servicers’ compliance with MHA program requirements and identified certain risks associated with the recent changes, but it has not provided meaningful performance goals or comprehensive risk assessments for HAMP Tier 2. As we previously reported, agencies must identify the risks that could impede the success of new programs and determine meaningful methods of mitigating these risks. We have highlighted the need for Treasury to develop necessary controls to mitigate those risks before a program is implemented.
Without the more meaningful risk assessments, Treasury will not be able to fully and effectively use the nearly $46 billion in TARP funds that it has obligated to meet the statutory goals of protecting homeownership because of the possibility of increased redefaults or other risks that could impede the success of the new program changes. In addition, Treasury has not developed program-specific performance measures for HAMP Tier 2. Without specific program measures, Treasury will not be able to effectively assess the outcomes of these programs and hold servicers accountable for performance goals. Treasury has established several layers of review and reporting to monitor the states’ Hardest Hit Fund activity, but its oversight and monitoring of state administrative expenses for the Hardest Hit Fund are limited, and the administrative expenses associated with these programs are not transparent. Further, Treasury has not published consolidated state performance reports and financial reports, including administrative expenses incurred, limiting the ability of policymakers and the public to assess the status of the program and each state’s performance relative to other states. Without this information, policymakers and the public will have difficulty evaluating whether the Hardest Hit Fund program is achieving its goals in an effective manner. In order to continue improving the transparency and accountability of MHA and the Hardest Hit Fund programs, we recommend that the Secretary of the Treasury take the following three actions: expeditiously conduct a comprehensive risk assessment of HAMP Tier 2, using the standards for internal control in the federal government as a guide; develop activity-level performance measures and benchmarks related to the HAMP Tier 2 program; and consolidate the state performance reports and financial reports, including administrative expenses, into a single Hardest Hit Fund report to provide policymakers and the public with the overall status of the program as well as the relative status and performance of the states’ efforts. We provided a draft of this report to Treasury and FHFA for review and comment. FHFA provided the draft report to Fannie Mae and Freddie Mac. We received written comments from Treasury’s Assistant Secretary for Financial Stability that are reprinted in appendix III. We also received technical comments from Treasury, FHFA, and Fannie Mae that we incorporated as appropriate. In its written comments, Treasury did not state whether it agreed or disagreed with our recommendations but noted that it would respond in detail in its 60-day response letter to Congress. However, Treasury stated that it took exception to our finding that it did not conduct appropriate risk assessments prior to the implementation of HAMP Tier 2. Specifically, Treasury noted that at the outset of the development of HAMP Tier 2, it performed a baseline assessment of the potential programmatic, technical, fraud, and other risks involved and listed several activities it undertook during this assessment. In the draft report, we acknowledged that Treasury identified various risks while designing the program—such as the redefault risk associated with modifications that would result in DTIs of up to 55 percent—and described the actions Treasury cited as mitigating those risks. We also described many of the activities Treasury outlined in its comment letter related to the design and implementation of the program. 
However, during our review Treasury was unable to provide documentation of any risk assessments that had been performed during the development of HAMP Tier 2. After receiving a draft of this report, Treasury prepared a summary table that outlined examples of risks it had identified and actions it had taken to mitigate them. We used this information to incorporate additional examples into the report. However, neither this summary nor Treasury’s description of its analysis indicated that it had conducted a comprehensive analysis of these risks, including an assessment of their significance and likelihood of occurrence, as outlined in our standards for internal control. Without this type of detailed information, determining whether the mitigating actions outlined by Treasury are sufficient or comprehensive is difficult. In its comment letter, Treasury stated that a more formal assessment might be more appropriate for programs that were fully operational and had established processes that were reasonably mature. As we have previously reported and reiterate in this report, agencies must identify the risks that could impede the success of new programs, determine appropriate methods of mitigating these risks, and develop appropriate controls before the programs’ implementation dates. As a result, our position remains that Treasury must complete a comprehensive risk assessment that analyzes the significance and likelihood of occurrence of the risks it has identified in order to provide reasonable assurance that appropriate and meaningful steps have been taken to mitigate risks associated with HAMP Tier 2. We have clarified our recommendation to reference federal standards for internal control as guidance regarding key aspects of a comprehensive risk assessment. We are sending copies of this report to interested congressional committees and members of the Financial Stability Oversight Board, Special Inspector General for TARP, Treasury, FHFA, the federal banking regulators, and others. We also will make this report available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In response to a mandate in the Emergency Economic Stabilization Act of 2008, this report examines (1) steps the Department of the Treasury has taken to design and implement recent changes to the Making Home Affordable (MHA) programs and (2) Treasury’s monitoring and oversight of state housing finance agencies’ (HFA) implementation of the Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund). To examine Treasury’s implementation of recent changes to MHA programs, we reviewed internal documentation related to the decision- making process. We also obtained and analyzed Treasury’s Home Affordable Modification Program (HAMP) data in its system of record, Investor Reporting/2 (IR/2), through March 2012, to identify patterns in program participation, and we determined that these data were sufficiently reliable for the purposes used in the report. 
We also reviewed MHA documentation issued by Treasury, including the supplemental directives related to the recent changes to HAMP Tier 2 as well as the Principal Reduction Alternative and Second Lien Modification Program incentives; the MHA handbook for servicers; and monthly performance reports. We reviewed and analyzed MHA program and expense information in the quarterly reports to Congress issued by the Special Inspector General for the Troubled Asset Relief Program (SIGTARP). We also spoke with officials at Treasury to understand the challenges faced in implementing these programs and the steps taken by Treasury to assess the capacity needed for and risks of these programs, as well as steps taken to measure the programs’ success. Further, we spoke with management staff from five large MHA servicers about the challenges and potential impact of implementing these program changes. These five servicers were Bank of America; CitiMortgage; JP Morgan Chase Bank; Ocwen Loan Servicing; and Wells Fargo Bank. We identified them as large MHA servicers based on the amount of Troubled Asset Relief Program (TARP) funds they were allocated for loan modification programs. These five servicers collectively represented about 68 percent of the TARP funds allocated to participating servicers as of March 31, 2012. We also spoke with an organization representing homeowners and community advocates about the potential impact of implementing these program changes. Finally, we reviewed (1) the Standards for Internal Control in the Federal Government to determine the key elements needed to ensure program stability and adequate program management; (2) Treasury’s strategic plan, monthly reports, and quarterly servicer assessments to determine the goals, strategies, and performance measures for the MHA program; and (3) leading practices for program management under the Government Performance and Results Act of 1993 (GPRA) and the requirements of the GPRA Modernization Act of 2010. To examine Treasury’s oversight and monitoring of the states’ implementation of the Hardest Hit Fund, we reviewed Treasury’s funding announcements for the Hardest Hit Fund as well as program participation agreements between the states and Treasury and subsequent amendments to those agreements; quarterly performance reports submitted by the states; analytical tools developed by Treasury to track program spending for borrower assistance and administrative costs; and examples of compliance reviews completed by Treasury and the states’ responses. We also spoke with officials at Treasury to understand the challenges faced in implementing these programs and the steps taken by Treasury to assess the capacity needed for and risks of these programs, as well as steps taken to measure the programs’ success. Further, we spoke with management staff from four states that received allocations through the Hardest Hit Fund—California, Florida, Nevada, and Ohio—and the District of Columbia. To select states to interview, we considered the size of the state’s allocation, the number of Hardest Hit Fund programs administered by the state, the percentage of the allocation that had been drawn as of December 2011, the borrower approval rate, and the geographic location. We also spoke with mortgage industry participants and observers, including servicers and associations representing housing counselors and legal services attorneys.
We conducted this performance audit from February 2012 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives.

Recommendation: As part of its efforts to continue improving the transparency and accountability of HAMP, the Secretary of the Treasury should place a high priority on fully staffing vacant positions in the Homeownership Preservation Office (HPO)—including filling the position of Chief Homeownership Preservation Officer with a permanent placement—and evaluate HPO’s staffing levels and competencies to determine whether they are sufficient and appropriate to effectively fulfill its HAMP governance responsibilities.

Actions taken: Treasury hired a permanent Chief Homeownership Preservation Officer on November 9, 2009. Based upon input from HPO senior staff, the Chief Homeownership Preservation Officer subsequently reduced the staffing levels for HPO. In June 2012, Treasury officials stated that a comprehensive staffing assessment was ongoing for all of the Office of Financial Stability, including HPO.

Recommendation: As part of its efforts to continue improving the transparency and accountability of HAMP, the Secretary of the Treasury should expeditiously implement a prudent design for remaining HAMP-funded programs.

Actions taken: Our March 2011 report identified areas in which Treasury had made changes to the original design and requirements of the more newly announced HAMP-funded programs (i.e., Second Lien Modification (2MP), Home Affordable Foreclosure Alternatives (HAFA), and Principal Reduction Alternative (PRA) programs) and made recommendations to continue improving the transparency and accountability of Making Home Affordable (MHA) related to these newer programs. Those recommendations remain open.

Recommendation: As part of its efforts to continue improving the transparency and accountability of HAMP, the Secretary of the Treasury should expeditiously finalize and implement benchmarks for performance measures under the first-lien modification program, as well as develop measures and benchmarks for the recently announced HAMP-funded homeowner assistance programs.

Actions taken: Starting with the MHA program performance report through April 2011, Treasury has publicly reported on the performance of the top 10 participating servicers in three categories—identifying and contacting homeowners, homeowner evaluation and assistance, and program management, reporting, and governance. Treasury has established benchmarks for each of these three categories that consist of both quantitative and qualitative (incorporating the results of its compliance reviews) criteria. However, the performance metrics are based on the HAMP first-lien modification program and do not contain measures or benchmarks for the more recently announced TARP-funded homeowner assistance programs.
Recommendation: As part of its efforts to continue improving the transparency and accountability of HAMP, the Secretary of the Treasury should expeditiously report activity under the principal reduction program, including the extent to which servicers determined that principal reduction was beneficial to investors but did not offer it, to ensure transparency in the implementation of this program feature across servicers.

Actions taken: Starting with its monthly MHA performance report for activity through May 2011, Treasury began reporting summary data on the PRA program. Specifically, Treasury provides information on PRA trial modification activity (started, cumulative, and permanent), as well as the median principal amounts reduced for active permanent modifications. In addition, beginning with its MHA performance report for activity through October 2011 and quarterly thereafter, Treasury reported more detailed data on the characteristics of loans that received PRA modifications. In June 2012, Treasury officials stated that they had been working with servicers to improve the quality of the data provided on PRA and were undertaking additional research to look at the effectiveness. However, no data are reported on the extent to which servicers determined that principal reduction was beneficial to the investor but was not offered.

Recommendation: As part of its efforts to continue improving the transparency and accountability of HAMP, the Secretary of the Treasury should expeditiously and more clearly inform borrowers that the HOPE Hotline may also be used if they are having difficulty with their HAMP application or servicer or feel that they have been incorrectly denied HAMP; monitor the effectiveness of the HOPE Hotline as an escalation process for handling borrower concerns about potentially incorrect HAMP denials; and develop an improved escalation mechanism if the HOPE Hotline is not sufficiently effective.

Actions taken: According to Treasury, it has promoted the HOPE Hotline through a number of channels to the public as a resource for borrowers with questions and problems about their HAMP application, trial period plan or permanent modification. For example, the hotline number is published on Treasury’s MHA website, featured in media campaigns, and used in talking points for borrower/counselor events and media interviews. Treasury’s MHA program guidelines require that servicers include in their notices to borrowers regarding the status of requests for a HAMP loan modification the telephone number for the HOPE Hotline, with an explanation that the borrower can seek assistance at no charge from HUD-approved housing counselors and can request assistance in understanding the Borrower Notice by asking for MHA Help. In MHA program guidance issued on November 3, 2010, Treasury standardized the process required for handling certain borrower inquiries and disputes related to the MHA Program. The guidance also outlines the servicer’s obligations for tracking borrower inquiries and disputes and conducting reviews in a timely fashion, whether received directly from a borrower or indirectly from the HOPE Hotline, through MHA Help, or the HAMP Solution Center. However, Treasury has not yet indicated how it will monitor the effectiveness of the HOPE Hotline as an escalation process for handling borrower complaints about potentially incorrect HAMP denials.
Recommendation: As part of its efforts to continue improving the transparency and accountability of MHA, the Secretary of the Treasury should require servicers to advise borrowers to notify their second-lien servicers once a first lien has been modified under HAMP to reduce the risk that borrowers with modified first liens are not captured in the Lender Processing Services (LPS) matching database and, therefore, are not offered second-lien modifications.

Actions taken: In Supplemental Directive 11-10 issued on September 29, 2011, Treasury announced that servicers must inform each borrower who receives a HAMP permanent modification of the borrower’s potential eligibility for a second-lien modification under 2MP. Treasury updated the Home Affordable Modification Agreement Cover Letter form to include model clauses that could be used to notify borrowers, including a link to the MHA website to determine whether the second-lien servicer was participating in 2MP and a statement encouraging the borrower to contact the second-lien servicer if the servicer did not contact the borrower within 60 days.

Recommendation: As part of its efforts to continue improving the transparency and accountability of MHA, the Secretary of the Treasury should ensure that servicers demonstrate that they have the operational capacity and infrastructure in place to successfully implement the requirements of the 2MP, HAFA, and PRA programs.

Actions taken: Treasury stated that Freddie Mac’s MHA-Compliance unit, the compliance agent for the Making Home Affordable program, uses information received from Fannie Mae, in its capacity as the MHA program administrator, regarding servicer readiness for various program elements as part of the compliance review scheduling and planning process. Treasury noted that during the normal course of a servicer review, part of the review is focused on the evaluation of new programs such as HAFA, 2MP, and PRA as they are implemented by a servicer. According to Treasury, the specifics of these evaluations are designed to ensure adherence with the program guidelines, as well as with the servicer’s ability to meet those guidelines. Treasury stated that in instances in which a servicer had implementation challenges and was unable to meet implementation timelines or specific elements of the program, these matters would be raised to OFS management and tracked to resolution by MHA-Compliance to ensure that implementation occurred as soon as practicable.

Recommendation: As part of its efforts to continue improving the transparency and accountability of MHA, the Secretary of the Treasury should consider methods for better capturing outcomes for borrowers who are denied or canceled or have redefaulted from HAMP, including more accurately reflecting what actions are completed or pending and allowing for the reporting of multiple concurrent outcomes, in order to determine whether borrowers are receiving effective assistance outside of HAMP and whether additional actions may be needed to assist them.

Actions taken: Treasury stated that it had revised the survey it conducted of the 10 largest MHA servicers regarding the disposition of borrowers who had been denied HAMP modifications or were cancelled from trials to ask about dispositions of borrowers who were “in process” and “completed” to clarify their status. Treasury stated that it was important to note that survey data were generally collected for at least 3 months prior to publication to ensure the integrity of the data.
Therefore, the changes made to the survey are not currently reflected in the data contained in the monthly MHA program performance reports. Treasury stated that it anticipated that it would be able to begin reporting using the revised survey data in fall 2011. However, Treasury stated that it did not intend to revise its survey to collect data on borrowers that were being considered for multiple outcomes. Treasury stated that while borrowers could be under evaluation for an alternative modification while in foreclosure, the greatest impact would be the final determination (e.g., whether the borrower received an alternative modification or was in the foreclosure path). In addition to the contact named above, Harry Medina (Assistant Director), Dan Alspaugh, Don Brown, Emily Chalmers, John Karikari, Marc Molino, Jill Naamane, Andrew Stavisky, Eva Yikui Su, James Vitarello, and Henry Wray made key contributions to this report.
More than 3 years have passed since Treasury made up to $50 billion available to help struggling homeowners through the MHA program, and foreclosure rates remain near historically high levels. Further, more than 2 years after Treasury set up the Hardest Hit Fund to help homeowners in high-unemployment states, much of the money remains unspent. The Emergency Economic Stabilization Act of 2008, which authorized Treasury to create TARP, requires GAO to report every 60 days on TARP activities. This 60-day report examines (1) the steps Treasury took to design and implement recent changes to MHA, and (2) Treasury’s monitoring and oversight of states’ implementation of Hardest Hit Fund programs. To address these questions, GAO analyzed data and interviewed officials from Treasury, five selected Hardest Hit Fund states, and five large MHA servicers. The Department of the Treasury announced changes in January 2012 to its Making Home Affordable (MHA) programs, which are funded by the Troubled Asset Relief Program (TARP), to address barriers to borrower participation. These changes include expanding eligibility criteria and extending application deadlines through 2013. Not enough time has passed to assess the extent to which these changes will increase participation. Several large servicers were not able to fully implement the changes by the June 1, 2012, effective date, and servicers that GAO queried had mixed views about possible effects. Treasury consulted with servicers, investors, and federal banking regulators before implementing the changes but did not perform a comprehensive risk assessment for the changes or develop meaningful performance measures in accordance with standards for internal control. As a result, Treasury may have difficulty mitigating potential risks, such as an increase in redefaults or the misuse of funds; effectively assessing program outcomes; or holding servicers accountable. After a slow start, states increased their spending on borrower assistance under the Housing Finance Agency Innovation Fund for the Hardest Hit Housing Markets (Hardest Hit Fund). The assistance provided as of March 2012 totaled about 5 percent of the $7.6 billion allocation. All but one state that GAO spoke to anticipated spending their full allocations, and all noted that with Treasury’s help they had dealt with challenges related to staffing, infrastructure, servicer participation, borrower outreach, and program implementation. Treasury officials said that they expected initial administrative spending to be high as states established their programs, and 27 percent of states’ total spending was for administrative expenses as of March 2012. Treasury officials stated that states would be required to report publicly on administrative costs beginning with the third quarter of 2012. Treasury has been monitoring states’ performance and compliance but has not reported consolidated performance and financial data (including administrative expenses) for the programs. The lack of consolidated reporting of performance and financial data limits transparency and efforts to ensure that resources are used effectively to achieve program goals. Treasury should (1) expeditiously assess the risks associated with the recent changes to MHA and develop activity-level performance measures for each program, and (2) consolidate the states’ Hardest Hit Fund performance and financial data, including administrative expenses, into a single public report. 
Treasury neither agreed nor disagreed with the recommendations but took exception to the finding that it did not conduct a comprehensive risk assessment prior to implementing the MHA program changes. In response, GAO provided examples of key components of a comprehensive risk assessment that Treasury had not addressed.
The State Department operates over 160 embassies and over 100 consulates at a cost of about $2 billion annually. The embassies perform diplomatic and consular functions and provide administrative support for other U.S. agencies. State employs over 7,300 U.S. Foreign Service officers, about 10,000 Foreign Service nationals, 650 U.S. contractors, and 30,000 Foreign Service national contractors. Worldwide, embassies manage about $600 million worth of personal property, procure about $500 million in goods and services annually, and share management responsibilities for about $12 billion in housing and other real properties. Embassies also have responsibility for over $2 million annually in accounts receivable, such as medical expenses. For decades, long-standing management deficiencies have weakened administrative operations at the embassies, and millions of dollars remained unnecessarily vulnerable to fraud, waste, and abuse. We have previously criticized State’s deficient controls over embassies’ personal and real property, cashiering operations, contract administration, and training. In July 1993, we testified that management deficiencies continued to plague embassies’ operations. We suggested that each embassy establish a formal management improvement program to ensure sound management practices by documenting problems and monitoring corrective actions. For years, Congress has been concerned about State’s reluctance to address management and internal control problems that have historically reduced the effectiveness of its operations. In its November 1993 report, the House Committee on Government Operations stated that State should implement our recommendation that each embassy adopt a formal management improvement plan. On the basis of prior reviews by us and State’s Office of the Inspector General (OIG), the Committee also recommended that State (1) strengthen controls over personal property, (2) ensure that appropriate training is available for U.S. and foreign service national personnel, (3) implement contracting and procurement improvements, (4) eliminate control problems in cashiering functions, and (5) develop systems to track and collect medical insurance reimbursements. State has not implemented our suggestion that all posts establish formal management improvement programs to identify and correct deficiencies. State officials believe that their approach of targeting specific areas for improvement is more appropriate and achieves comparable results in the long term. We continue to believe that if State were to use existing mechanisms for managing embassy operations, such as the Mission Program Plan, it could more quickly and easily achieve the intent of our 1993 recommendation. (See app. I.) State has responded to recommendations contained in the House Committee on Government Operations’ report by initiating some specific actions designed to improve its management over embassy operations. These actions, although steps in the right direction, do not go far enough to ensure that each embassy is improving its operations. We and the State’s OIG continue to find deficiencies in (1) controlling personal property; (2) training for U.S. and foreign service national personnel; (3) contracting and procurement practices; (4) poor controls over cashiering functions; (5) medical insurance reimbursements; and (6) senior-level oversight of operations. 
In November 1993, the House Committee on Government Operations recommended that the State Department take the following actions to strengthen controls over personal property: establish more stringent procedures and guidance for receiving and issuing personal property overseas; improve the nonexpendable property application software to enhance reconciliation capability; provide increased and specialized training for Foreign Service officers and revise volume 6 of the Foreign Affairs Manual to require property officers to retain inventory records and other pertinent documentation in post files for 3 years; and adopt a zero-tolerance policy with respect to personal property losses. In July 1993, before the Committee’s report, State updated volume 6 of the Foreign Affairs Manual to include revised personal property regulations for all diplomatic and consular posts. This updated guidance incorporated changes in assigned responsibilities and federal regulations. The revised regulations also clarified accountability criteria for ensuring internal controls. On the basis of the new regulations, State’s Property Management Branch, which is responsible for central oversight for domestic and overseas personal property management, issued an instruction handbook that was intended to be an easy reference for posts to ensure compliance with management of personal property overseas. However, branch officials acknowledged that a number of posts were still not in compliance. During fiscal year 1994, branch staff visited 20 of the 260 posts to verify their annual inventory certification. Branch officials said that 14 posts failed to provide documentation that physical inventories were conducted. Although posts that do not provide inventory certifications can be subject to a withholding of funds for personal property acquisitions, and individuals that either refused to certify or falsely certified inventories can be subject to punitive actions, we found no instances in which money was withheld or individuals were sanctioned for not following property management procedures. According to State, it has a zero-tolerance policy on personal property losses for fraudulent behavior, but it does not believe it to be in the taxpayers’ interest to pursue small shortages; therefore, in November 1993, it adopted a 1-percent tolerance. State adopted this policy because the 1-percent level is commensurate with that of private industry and State officials believed that the cost to pursue shortages of less than 1 percent would outweigh any benefits. State officials said they required posts to submit to headquarters the amount of losses incurred in fiscal year 1994. Of the 160 posts that submitted such information, only 15 exceeded the 1-percent level. In 1989, to improve property management and accountability, State integrated an inventory reconciliation software package with its nonexpendable property application (NEPA) software at about 210 of the overseas posts. State is testing a new application of NEPA, but it has not yet determined how NEPA and other subsidiary systems will function with the planned Integrated Financial Management System. In August 1994, we reported that this system was at a high risk of failure because of State’s inadequate management and planning and therefore might not solve long-standing financial management and internal control problems. The Committee recommended that State train both U.S.
Foreign Service officers and Foreign Service national employees in the areas of procurement and acquisition, real property management and maintenance, personal property, and budget and fiscal responsibilities. State's training arm, the Foreign Service Institute, offers training in most of these areas. State acknowledged that, in some cases, Foreign Service officers report to posts without such training. In addition, according to the Director of the Office of Foreign Service National Personnel, training Foreign Service nationals is not a priority because of the high costs involved in bringing Foreign Service nationals to Washington, D.C. Although State says it has focused on increasing its regional training of Foreign Service nationals, those we interviewed said that training was still limited, often not timely, and generally not offered in their native language. Of the seven posts we visited, only Paris had a formal training program that identified the training requirements of Foreign Service nationals and officers or provided opportunities to meet them. State is exploring ways to increase the role of Foreign Service nationals in administrative operations overseas. However, the Foreign Service Institute does not have a formal plan in place to ensure that Foreign Service nationals receive adequate training. Transferring more responsibility to Foreign Service nationals without proper training is likely to weaken compliance with internal controls. In 1993, the Committee recommended a number of actions to improve contracting and procurement practices. These included (1) requiring training for all Foreign Service officers and Foreign Service nationals responsible for contracting and procurement, (2) developing and implementing a procurement management information system that includes overseas procurement operations, (3) requiring each post to fully implement the worldwide procurement data system and providing each post with appropriate software, (4) requiring each post to appoint a competition advocate and establish a competition advocacy program, and (5) requiring posts to develop advance acquisition plans each fiscal year. To address the need for procurement training, State established new training requirements for contracting officers, including training seminars for about 100 employees at seven regional centers. However, only 150 of the 700 officers overseas have received required training for standard contracting authority up to $250,000. The rest of State's overseas contracting officers have provisional contracting authority up to $100,000. Procurement officials estimate that it will take many years before all of these officers complete their training. In addition, some Foreign Service nationals responsible for maintaining contracting files indicated that they were not adequately trained. For example, one Foreign Service national told us she had been involved in procurement actions for 6 years before receiving formal training. State developed a worldwide procurement database to meet the minimum legal and regulatory overseas procurement reporting requirements. This database is currently in use at 193 (or 73 percent) of the 265 overseas posts. This database, however, only reports the number and types of contract actions. It is not used to manage, monitor, or ensure control over embassy procurement operations. Most of the posts we visited had not established the competition advocacy program called for by the Committee. 
The lack of such a program contributed to the failure of some posts to fully compete or review their contract actions and prepare and maintain required documentation. None of the posts had a written policy to advertise solicitations or had evidence that solicitations were authorized. Also, most posts did not maintain a current vendor list and, therefore, could not be assured that all potential sources had been solicited. Several of the embassy officials we met with said they had not received or could not locate headquarters' guidance stipulating the need for advance acquisition planning. In addition, none of the officials had developed an advance acquisition plan ranking essential procurements. Embassy cashiers are responsible for the day-to-day payment, collection, deposit, and reconciliation of funds advanced by regional disbursement centers. Cashiering operations are supervised by U.S. disbursing officers located at those centers. To improve controls over cashiering, in 1993, the Committee recommended that State fully fund the implementation of a worldwide standardized and integrated financial management system, adopt standardized accounting systems, increase monitoring and oversight of overseas cashiering operations, improve oversight of U.S. disbursing officers' operations to ensure that transactions and accounts are properly recorded and reconciled, and require all posts to train staff in safeguards and procedures to prevent theft or misuse of official funds. State has not fully implemented the computerized Integrated Financial Management System; therefore, controls over cashiering continue to be manual and dependent on noncompliant financial systems in the majority of overseas posts. Although reconciliations are required monthly at overseas posts, only about one-third of embassies' cashiering operations are currently reviewed each year by external review teams from State's Financial Service Centers. Headquarters officials said that losses have been minimal, but acknowledged that major problems could occur. To gain control over disbursing operations overseas, State has centralized 18 of 19 disbursing operations at its 3 regional administrative management centers and plans to relocate the 1 remaining operation (Brasilia). State's Deputy Chief Financial Officer and Deputy Assistant Secretary for Finance initiated this action to improve oversight and management controls over disbursing. State also created the Office of Overseas Financial Management and Oversight under the Chief Financial Officer. However, officials from this office said that fiscal irregularities were continuing due to (1) the lack of training for U.S. Foreign Service officers and Foreign Service nationals on cashiering practices, (2) negligence, and (3) malfeasance. To address the Committee's recommendation to train staff on financial controls, in June 1994, at the Regional Administrative Management Center in Mexico City, State trained about 40 budget and fiscal officers and 40 supervisory Foreign Service nationals from the posts in Mexico on safeguards and procedures to prevent theft or misuse of funds. However, State officials said more regional training was needed for the hundreds of Foreign Service nationals supporting State's budget and fiscal operations overseas. 
In 1993, the Committee recommended that State (1) develop and implement systems that identify and report on overseas medical expenses paid, claims filed, and amounts reimbursed to the government and (2) require all Foreign Service officers serving overseas to carry private medical insurance. State's Office of Medical Services now assigns an obligation number for each medical claim and authorizes payment by the overseas posts. The embassy notifies the office of each payment, and an account receivable and corresponding billing documents are then established in the Central Financial Management System. These actions resulted in collections of over $1 million in fiscal year 1994, including funds owed since 1991. According to a Medical Services official, the collection system applies to State employees only. It does not cover employees of other agencies that may receive medical services overseas. Although State still does not require Foreign Service officers to have private medical insurance before they are assigned overseas, it has stopped paying claims for hospitalization of those without insurance, with the exception of the hospital admission charge, which must be promptly reimbursed. In 1993, the Committee called for increased oversight of operations by senior officials both in Washington and at the embassies. State officials acknowledged that a greater emphasis should be placed on management controls and that commitment and support should come from the top. To enhance senior managers' commitment at posts, State has introduced a number of actions intended to address the managers' systemic disregard for sound management practices and establish accountability for carrying out headquarters' requirements. For example, State now emphasizes the importance of management controls and responsibility for those controls to newly appointed ambassadors during preassignment briefings and in the Secretary's Chief of Mission Authority Letter. The Chiefs of Mission are required to develop a Mission Program Plan that will form the basis for the missions' major activities and resource allocations and have the plan approved by the Assistant Secretary of State. They are also required to reduce mission costs whenever possible, implement sound management controls to ensure that government resources are maximized and protected, and certify annually that management controls are adequate. Another action to increase senior-level attention to embassy management was the addition of a management control segment to the training course for new Deputy Chiefs of Mission. This segment defines management controls, emphasizes using the Mission Program Plan, and encourages the use of the risk assessment questionnaire. In addition, the risk assessment questionnaire was revised to include questions covering the minimum controls necessary for facilities maintenance, contracting, and medical reimbursements. These initiatives were inconsistently applied at the posts we visited. However, as discussed below, posts that employed sound management practices had the active involvement of the Deputy Chief of Mission serving as a Chief Operating Officer. Some embassies have implemented practices on their own to improve administrative operations. Practices such as those we observed in Ankara, Tunis, and Dhaka could be used by other embassies to strengthen management controls, reduce costs, foster accountability, and increase compliance with applicable regulations. 
Embassies in Ankara, Tunis, and Dhaka introduced operational improvements to address and correct continuing deficiencies in the areas of property management, training, contract administration, and cashiering. For example, in Tunis and Ankara, setting performance targets for inventory control and accountability resulted in more efficient property utilization and reduced losses from theft. Cross-training programs for Foreign Service nationals within the budget and finance offices in Tunis and Ankara increased their supervisors' flexibility to fill staffing gaps and enhanced morale among these employees. In Tunis and Dhaka, the implementation of internal control checklists for contract administration ensured that their contracting and procurement operations were in compliance with regulations. All three posts have developed systems for tracking and collecting accounts receivable, which resulted in more accountability, cost savings, and reduced vulnerabilities to fraud, waste, and abuse. Table 1 summarizes the initiatives at these posts. We discussed these practices with State Department officials in Washington, D.C., and determined that the initiatives could be used to improve operations at other posts, as applicable. These officials said that many of the practices could be introduced through the post planning process and would greatly assist in their efforts to achieve real management reform of embassy operations. As budget uncertainties continue, implementation of these practices could provide overseas managers with more flexibility in managing their operations. These posts had two other practices in common—the direct involvement of senior officials in post operations and the use of existing management tools to address deficiencies. These practices could also be replicated at other embassies. At embassies in Ankara, Tunis, and Dhaka, the Deputy Chiefs of Mission and sometimes the Chiefs of Mission are directly involved in embassy administration. The commitment of these officials to management is demonstrated through regularly scheduled meetings to discuss management issues, an open-door policy for the resolution of problems, and daily reviews of management operations. The Deputy Chiefs of Mission serve as the Chief Operating Officer at all three missions. These officials emphasize a zero-tolerance policy for inadequate management controls. They use management reviews and performance evaluations to hold section managers accountable for adequate internal controls and corrections of management deficiencies. In addition, the Deputy Chiefs of Mission regularly reinforce the importance of internal controls to administrative staff through counseling, according to embassy officials. Embassy managers stressed the importance of senior management involvement in the management of operations and said senior officials set the tone for how well their administrative staff will manage embassy operations. Reports by State's OIG have documented the critical link between the emphasis placed on internal controls by senior officials and the attention given to management issues throughout the embassy. Senior managers at embassies in Ankara, Tunis, and Dhaka have successfully used existing, agencywide reporting requirements to address and correct management deficiencies. These include the Mission Program Plan, the risk assessment questionnaire, and the certification of internal controls. In 1990, the mission program planning process began. 
The Mission Program Plan is a long-range planning document that is updated annually to address the objectives of the mission and the resources needed to fulfill those objectives. It addresses all areas of embassy operations, including administrative operations. According to State guidance, the plan should include milestones for critical progress points and completion of action. The plan also has a performance and evaluation component. The Mission Program Plans for the embassies in Ankara, Dhaka, and Tunis all incorporated detailed statements of objectives and responsibilities within the administrative section, which helped management focus attention on identifying problems and developing corrective action plans. For example, in Ankara the Mission Program Plan establishes time frames for the correction of management deficiencies and identifies offices that are accountable for the corrections. According to officials in the Office of Management and Planning, State is encouraging the posts to use this mechanism to address management weaknesses and increase accountability by tying resource allocations to objectives of the plan (see app. I). While few posts currently do this, our review indicates that using the Mission Program Plan to address deficiencies would be consistent with our recommendation that each post establish a proactive management improvement plan. The risk assessment questionnaire identifies internal control weaknesses. State's policy requires posts to complete these questionnaires just before an inspection by the OIG, which usually occurs every 4 to 5 years. However, to help ensure adequate internal controls at the posts, State sent a February 1994 cable to all overseas posts that encouraged them to use the risk assessment questionnaire as frequently as local conditions warrant. The embassies in Ankara, Dhaka, and Tunis have used the risk assessment questionnaire at least once a year to assess administrative weaknesses. The questionnaires have provided input for the planning process and served as a foundation for the annual certification of internal controls. These posts also used the questionnaire to link management controls to goals and objectives in the Mission Program Plan. For example, in Ankara, administrative officers developed detailed corrective action plans, including milestones, based on the results of their questionnaires. Officials at these posts agreed that the questionnaire was an excellent management tool for identifying potential problems and that it can be completed with minimal effort. Officials in Washington asserted that all embassies should use the questionnaire on a more frequent basis. Officials in the Office of Finance and Management Policy said they encourage posts to use the questionnaire as a self-assessment management tool and find that posts that are concerned about management use the questionnaire annually, while posts less concerned about management use it only prior to an inspection. The Chiefs of Mission are required by the Secretary of State to certify the adequacy of management controls each year. These certifications are to aid the Secretary of State in preparing the annual report required by the Federal Managers' Financial Integrity Act. The mission chiefs at the embassies in Ankara, Dhaka, and Tunis said they did not sign their certifications until they were sure that spot checks had been conducted to ensure the veracity of the certifications. 
Officials at the other four posts we visited did not use the questionnaire to validate their certifications, and their Chiefs of Mission relied solely on their administrative officers' opinions, without conducting spot checks, in certifying the posts' internal controls. We recommend that the Secretary of State expand the operational improvements discussed in this report to a minimum of 50 other embassies on a test basis to help improve operations. If the test demonstrates the applicability of these improvements in a variety of posts, the practices should be further expanded until the maximum benefits are achieved. In commenting on a draft of this report, State Department officials stated that improving the management of State's overseas operations was a high priority and that the Department would like to see the overseas posts use the practices that we identified as a positive management tool in ways that make sense for their particular circumstances and environments. State believes it needs to provide overseas posts with information on the initiatives of other posts, but it does not want to make the implementation of such practices a requirement. We do not believe that relying on voluntary adoption of these practices will produce the maximum benefits. The management deficiencies have existed for decades. However, because our findings were focused on only a few overseas posts, and State points out that overseas posts operate in different environments, we have modified our position from one that would require all posts to immediately implement the recommended improvements. We believe that if State is serious about trying to improve management of its overseas operations, then out of its more than 260 posts, it should be willing to pilot test the recommended actions at a minimum of 50 posts. If the pilot demonstrates the applicability of these improvements in a variety of posts, then State should continue to expand the use of these practices until the maximum number of posts benefit. The Department of State's comments are presented in their entirety in appendix II along with our evaluation of them. We interviewed State Department officials in Washington, D.C., who are responsible for embassy management oversight, to assess actions taken by State to improve the management of its overseas operations. We analyzed documentation related to embassy management improvements provided by functional managers and documented continuing management deficiencies from State OIG reports. (See app. III for a listing of related GAO and OIG reports.) In addition, we observed good management practices at certain embassies that could be used at other embassies. We selected these embassies based on (1) State OIG reports that identified good management practices at these posts and (2) the recommendations of post management officers responsible for embassy oversight. Overall, we reviewed operations at U.S. embassies in Venezuela, Tunisia, France, Portugal, Turkey, the Philippines, and Bangladesh. We performed our work from April 1994 to November 1995 in accordance with generally accepted government auditing standards. Please contact me at (202) 512-4128 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI. State's primary means for linking foreign policy objectives and resources is the program planning process. 
In a November 1994 cable to all diplomatic and consular posts, the Under Secretary for Management informed the Chiefs of Mission that the link between resources and the mission program planning process was missing; consequently, budget reductions were enacted without thought to the future. The Under Secretary instructed mission management to develop a mission program plan that reflects mission priorities in both policy and management areas, actively involves all mission elements in its preparation, and serves as an instrument for continuous management improvement. State guidance to embassies for preparing the Mission Program Plans (MPP) for fiscal years 1995 through 1999 attempts to build on previous planning efforts and encourages posts to embrace MPP as a management creed of continuous improvement to support the Department's goal of building an efficient organization. This guidance directs embassies to use MPP as a tool to measure progress in achieving mission objectives, including examining innovative and lower-cost ways to deliver administrative support. More importantly, State guidance instructs embassy managers to document how they will address material weaknesses in administrative areas when reducing administrative staff. The structure of MPP supports a proactive management improvement effort. MPP has an administrative section that reviews financial management, cashiering, procurement, supplies, and warehousing. In addition, MPP has a status-of-progress section that tracks progress on administrative and other mission goals. The Under Secretary for Management's guidance encourages senior managers to personally assist in the preparation and implementation of the plan by (1) objectively measuring or validating results and adjusting performance through a regular, systematic process; (2) providing personal leadership and involvement; and (3) holding others accountable on a regular basis. Senior officials are also directed to establish incentives to help institutionalize the use of MPPs. To fully achieve these goals, recent headquarters actions have attempted to link embassy staff work requirements to mission program plans. One of these actions was to require that objectives of MPP be reflected in work requirements statements so that performance can be linked to the successful achievement of MPP goals. Assistant secretaries are also instructed to evaluate the performance of Chiefs of Mission based on the successful achievement of MPP objectives and their diligence in evaluating subordinates' performance against MPP objectives. To assist posts in using MPP to manage resources, the Under Secretary for Management issued 5-year staffing and funding levels for each geographic bureau. Bureaus use MPPs to review current resource deployments against policy priorities and determine the optimal match of resources and post needs. Both the Bureau of Diplomatic Security and the Bureau of International Organization Affairs have established exemplary bureau planning processes. The Bureau of Diplomatic Security initiated an operational planning system in fiscal year 1987 to establish specific goals and monitor progress in security programs receiving funds from the Supplementary Diplomatic Security budget. This effort has become known as the Milestone Program. The program, which is administered by the Bureau's Office of Policy, Planning, and Budget, expanded in fiscal year 1988 to include all bureau programs. 
The Milestone Program applies management-by-objectives criteria to the security programs managed by the Bureau. Elements of the program include: meeting monthly to discuss program performance, problems, and modifications and to revise milestones for the next cycle; tracking activities to specific program objectives; establishing performance measurements to keep programs in compliance; tying financial information to program milestones and continually analyzing ways to contain costs and streamline activities; and fully integrating the Bureau's planning process with its milestones. Likewise, the Bureau of International Organization Affairs' Internal Controls Plan uses a management-by-objectives process that links foreign policy and management priorities to resource allocations. According to Bureau officials, this plan allows the Bureau to identify internal control weaknesses and better allocate resources. Program planning officials believe elements of these programs can significantly improve planning efforts at other bureaus. The following are GAO's comments on the Department of State's letter dated November 8, 1995. 1. We have modified our report by stating that branch officials said that 14 of the 20 posts visited failed to provide documentation that physical inventories were conducted. We also footnoted that 12 of the posts subsequently submitted the required certification. 2. We have modified the report in line with the comment. 3. We agree that a single automated system for processing travel vouchers is needed. However, replication of individual post systems that work could be beneficial to other posts until State is able to implement a uniform system for vouchers processed overseas. 4. Standardized procedures for tracking accounts receivable and other collections have long been needed in State. However, we believe that until standard procedures are implemented, application of automated systems used at individual posts would prove useful. 5. We did not recommend that State centrally develop manuals for all posts. However, State's endorsement of standard operating procedures manuals for each post could encourage individual posts to develop manuals consistent with their individual needs and conditions. 6. Although State described this practice as a standard procedure, our review indicated that only a few posts were actually performing this internal control procedure. 7. Completion of the risk assessment questionnaire annually by the posts would optimize the use of this document, which has been endorsed by the State Department as an excellent management tool. We do not believe that it is necessary for Washington to score and evaluate the questionnaires on an annual basis. Instead, the posts could use and score their own questionnaires for self-assessment purposes during the annual certification process. 8. The Secretary of State's endorsement of the use of best management practices throughout State's overseas system, where applicable, would help demonstrate a commitment from the top to improve management at the overseas posts. It would also encourage the use of best practices, such as automated travel voucher and accounts receivable tracking systems, on a greater scale until agencywide systems are available. Internal Controls: State's Controls Over Personal Property Management Are Inadequate (GAO/NSIAD-87-156, June 10, 1987). Embassy Contracting: State Department Efforts to Terminate Employee Association Contracts (GAO/NSIAD-88-85, Feb. 16, 1988). Overseas Support: Current U.S. 
Administrative Support System Is Too Complicated (GAO/NSIAD-88-84, Mar. 25, 1988). State Department: Status of Actions to Improve Overseas Procurement (GAO/NSIAD-92-24, Oct. 25, 1991). State Department: Need to Ensure Recovery of Overseas Medical Expenses (GAO/NSIAD-92-277, Aug. 7, 1992). Financial Management: Serious Deficiencies in State's Financial Systems Require Sustained Attention (GAO/AFMD-93-9, Nov. 13, 1992). High-Risk Series: Management of Overseas Real Property (GAO/HR-93-15, Dec. 1992). State Department: Management Weaknesses at the U.S. Embassy in Mexico City, Mexico (GAO/NSIAD-93-88, Feb. 8, 1993). State Department: Management Weaknesses at the U.S. Embassies in Panama, Barbados, and Grenada (GAO/NSIAD-93-190, July 9, 1993). State Department: Survey of Administrative Issues Affecting Embassies (GAO/NSIAD-93-218, July 12, 1993). State Department: Widespread Management Weaknesses at Overseas Embassies (GAO/T-NSIAD-93-17, July 13, 1993). Financial Management: State's Systems Planning Needs to Focus on Correcting Long-Standing Problems (GAO/AIMD-94-141, Aug. 12, 1994). State Department: Additional Actions Needed to Improve Overseas Real Property Management (GAO/NSIAD-95-128, May 15, 1995). Financial Management Overseas, State Department Inspector General Report (O-FM-008, Jan. 15, 1990). Overseas Foreign Affairs Administrative Support Costs, State Department Inspector General Report (1-FM-005, Dec. 20, 1990). Overseas Procurement Programs, State Department Inspector General Report (1-PP-004, Jan. 29, 1991). Management Improvements in Embassy Cairo's Administrative Operations, State Department Inspector General Report (3-FM-003, Jan. 12, 1993). Report of Inspection, Embassy Paris, France (ISP/I-93-10, Mar. 1993). Buildings Overseas-Maintenance and Repair, State Department Inspector General Report (3-PP-014, Sept. 14, 1993). Report of Inspection, Embassy Ankara, Turkey and its Constituent Posts (ISP/I-94-02, Oct. 1993). Recovery of Overseas Medical Expenses, State Department Inspector General Report (4-SP-003, Feb. 9, 1994). Report of Inspection, Embassy Tunis, Tunisia (ISP/I-94-20, Mar. 1994). Management of Overseas Travel Services, State Department Inspector General Report (4-SP-009, Feb. 22, 1994).
Pursuant to a congressional request, GAO reviewed the Department of State's efforts to improve the management of its embassies, focusing on whether State has responded to previous recommendations concerning embassy management. GAO found that: (1) State has not implemented the recommendation that it establish proactive management improvement programs at its overseas posts because it believes that its approach of targeting specific areas for improvement is more appropriate and achieves comparable long-term results; (2) State has initiated action on some congressional recommendations to improve its embassy management, but deficiencies continue in controls over personal property, training for U.S. and Foreign Service national personnel, contracting and procurement practices, controls over cashiering functions and medical insurance reimbursements, and senior-level oversight; (3) the embassies in Turkey, Bangladesh, and Tunisia have initiated management practices, such as tracking accounts receivable, automating travel vouchers, strengthening internal controls, improving regulation compliance, reducing costs, and enhancing efficiency and effectiveness; and (4) embassy senior managers participate in the day-to-day operations of their posts and use existing reporting requirements to document administrative problems and decide on appropriate corrective actions.
In 1986, Congress amended Title IV-E of the Social Security Act to authorize federal funds targeted to assist youth aged 16 and over in making the transition from foster care to living independently of the child welfare system and created the Independent Living Program (ILP). This program was designed to prepare adolescents in foster care to live self-sufficiently once they exited the child welfare system. Like many adolescents, foster care youth face a time of uncertainty and change as they approach age 18. However, research suggests that they may be at greater risk of experiencing negative consequences in adulthood, such as unemployment, incarceration, or poor health outcomes. For example, research indicates that 30 to 40 percent of youth in foster care are affected by chronic medical problems, but barriers exist to meeting those needs, such as prolonged delays in getting referrals to specialists. In addition, research shows that children and youth in foster care have poorer academic experiences than do their peers not in care. For example, twice as many youth in foster care as those not in foster care had repeated a grade, changed schools during the year, or enrolled in a special education program. Several amendments were made to the Independent Living Program over the years, but the passage of the Foster Care Independence Act of 1999 (FCIA) and the creation of the John H. Chafee Foster Care Independence Program (Chafee Program) represented the most significant changes in the federal independent living program since its creation. FCIA doubled the federal funds available for independent living programs to $140 million each year. These funds are allocated to states based on their share of the nation's foster care population. In addition to providing increased funding, FCIA eliminated the minimum age limit of 16 years and provided states with the flexibility to define the age at which children in foster care are eligible for services to help them prepare for independent living, as long as services are provided to youth who are likely to remain in foster care until 18 years of age. The law provided states the flexibility to develop programs that met the needs of the adolescents in their care, as long as states designed and conducted their programs based on the five key purposes outlined in the law (see table 1). The law also provided several new services to help youth make the transition to adulthood. First, it allowed states to use up to 30 percent of their state allotment for room and board for former foster care youth up to age 21. Second, it allowed states the option to expand Medicaid coverage to former foster care adolescents between 18 and 21. Title IV-E was amended again in 2002 to provide foster youth vouchers for postsecondary education and training under the Education and Training Vouchers (ETV) program and to authorize an additional $60 million for states to provide postsecondary education and training vouchers of up to $5,000 per year per youth. Eligible participants include youth otherwise eligible for services under the states' Chafee Programs, youth adopted from foster care after attaining the age of 16, and youth participating in the voucher program on their 21st birthday (until they turn 23 years old), as long as they are enrolled in a postsecondary education or training program and are making satisfactory progress toward completion of that program. In federal fiscal year 2003, approximately $41 million in federal funds was available for states' ETV programs. The amount increased slightly in federal fiscal year 2004 to approximately $44 million. 
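The ETV eligibility criteria described above amount to a simple decision rule: a youth must fall into one of the covered groups and must also be enrolled in postsecondary education or training and making satisfactory progress. The following is a minimal sketch of that rule, not an official determination procedure; the data structure, field names, and the etv_eligible function are hypothetical constructs used only to illustrate the criteria in the 2002 amendment.

```python
from dataclasses import dataclass

@dataclass
class Youth:
    # Hypothetical fields for illustration only
    chafee_eligible: bool            # otherwise eligible under the state's Chafee Program
    adopted_from_care_after_16: bool # adopted from foster care after attaining age 16
    in_etv_program_at_21: bool       # participating in the voucher program on their 21st birthday
    age: int
    enrolled_postsecondary: bool
    satisfactory_progress: bool

ETV_ANNUAL_CAP = 5_000  # maximum voucher amount per youth per year under the 2002 amendment

def etv_eligible(y: Youth) -> bool:
    """Rough restatement of the ETV eligibility rules described in the text."""
    in_covered_group = (
        y.chafee_eligible
        or y.adopted_from_care_after_16
        or (y.in_etv_program_at_21 and y.age < 23)
    )
    return in_covered_group and y.enrolled_postsecondary and y.satisfactory_progress
```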
In addition, the law required that states make every effort to coordinate their Chafee Programs with other federal and state programs for youth, such as the Runaway and Homeless Youth Program, abstinence education programs, local housing programs, programs for disabled youth, and school-to-work programs offered by high schools or local workforce agencies. Further, states were required to coordinate their programs with each Indian tribe in the state and offer the state's independent living services to Indian children. To receive funds under the Chafee Program, states were required to develop multiyear plans describing how they would design and deliver programs in accordance with FCIA and to submit program certifications. The multiyear Chafee plans must include a description of the state's program design, including its goals, strategies, and implementation plan for achieving the five key purposes detailed in the law. States were also required to certify that they would operate a statewide independent living program that complied with the specific aspects of the law, such as providing training to help foster parents, adoptive parents, workers in group homes, and case managers understand and address the issues confronting adolescents preparing for independent living. Further, to receive annual funds, the Administration for Children and Families (ACF) required states to submit annual reports that described the services provided and activities conducted under their Chafee Programs, including information on any program modifications and their current status of implementation; provide a record of how funds were expended; and include a description of the extent to which the funds assisted youth age 18 to 21 in making the transition to self-sufficiency. FCIA required that the Department of Health and Human Services (HHS) develop and implement a plan to collect information needed to effectively monitor and measure a state's performance, including the characteristics of youth served by independent living programs, the services delivered, and the outcomes achieved. Further, FCIA required HHS to conduct evaluations of independent living programs deemed to be innovative or of potential national significance using rigorous scientific standards to the maximum extent practicable, such as random assignment to treatment and control groups. Currently, ACF's 10 regional offices conduct much of the federal oversight for the Chafee Program. They are responsible for reviewing and approving the state plans, certifications, and annual reports. In addition, the regional offices provide assistance and guidance to the states on implementing and operating their programs. Technical assistance is also available to states from 10 national resource centers. In particular, the National Resource Center for Youth Development (NRCYD) assists states and tribes in helping youth in care establish permanent connections and achieve successful transitions to adulthood. Upon request from the states and approval from the regional offices, the NRCYD has facilitated stakeholder meetings that bring together officials from various state and federal programs within a state to promote communication, awareness, and information sharing and to provide strategies for long-term collaborative efforts around independent living. In 2001, ACF implemented an outcome-oriented process, known as the Child and Family Services Review (CFSR), in part to determine states' substantial conformity with Title IV-E provisions and hold states accountable for improving child welfare outcomes. 
The CFSR measures state performance on 45 performance items, which correspond to 7 outcomes and 7 systemic factors. States that were reviewed during the first year of the CFSR were rated on the provision of independent living services to youth in their care 16 years or older. This item was removed from the CFSR in subsequent reviews when ACF redesigned the review instrument to focus more on setting and achieving appropriate permanency goals for children and less on service delivery. With the redesigned instrument, reviewers were instructed to consider the provision of independent living services in other measures, such as when determining whether the youths' needs were assessed and whether appropriate services were provided. While overall federal funding for state independent living programs doubled with the passage of FCIA, there were significant variations in the changes to state allocations, and some states had difficulty expanding their programs quickly enough to spend all of the new funds. Prior to the passage of FCIA, states were awarded independent living funds based on the number of children receiving federal foster care payments in 1984. The new law updated the formula, which generally allocates funds to each state based on the state's proportion of the nation's population of children in foster care—regardless of whether the child is receiving federal or state-funded foster care payments. In addition, the new formula includes a hold-harmless provision to ensure that each state will receive at least the amount it received in federal fiscal year 1998 or $500,000, whichever is greater. Once the states subject to the hold-harmless provision are funded, the remaining funds within the cap of $140 million are allotted to the other states (this allocation logic is sketched below). Under the previous independent living program, states received funds ranging from $13,000 in Alaska to more than $12 million in California. In the first year of funding under FCIA, Alaska and eight other states received the minimum of $500,000, while California received more than $27 million (see table 2). In most cases, states received an increase in funds. However, the District of Columbia, Louisiana, and New Jersey received no additional funds the first year because their allocations under the new formula were initially lower than the amounts they received in 1998. Therefore, because of the hold-harmless clause, they received the same amount awarded in 1998. Some states were unable to spend all of their federal allocations in the first 2 years of increased funding under the program. In 2001, 20 states returned nearly $10 million in federal funding to HHS, and in 2002, 13 states returned more than $4 million. ACF regional officials reported that one reason for these unspent funds was that some states did not initially have the infrastructure in place to quickly absorb the influx of funds. Texas, for example, planned to use some of its $2.75 million in additional funds to develop services for youth in rural areas, but state officials said that the process of identifying service providers and issuing contracts was lengthy and was initially hampered by the need to find providers able to make the matching contributions required to receive federal funds under FCIA. As a result, over $500,000 of the state's total $4.6 million allocation went unspent in federal fiscal year 2001. 
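The allocation mechanics described above reduce to a proportional share of the $140 million cap, subject to a hold-harmless floor of the state's 1998 amount or $500,000, whichever is greater. The sketch below is a simplified, single-pass illustration under those stated assumptions, not the statutory computation, which involves additional details; the allocate function and the state names and figures in the example are hypothetical.

```python
# Minimal sketch of the FCIA allocation logic described above, assuming two steps:
# (1) identify states whose proportional share of the cap falls below their
# hold-harmless floor and fund them at the floor, then (2) ratably allot the
# remaining funds within the cap to the other states by their proportional shares.
CAP = 140_000_000          # annual funding cap in dollars
FLOOR_MINIMUM = 500_000    # statutory minimum per state

def allocate(foster_youth: dict, fy1998_awards: dict) -> dict:
    """foster_youth: state -> number of children in foster care (hypothetical counts).
    fy1998_awards: state -> federal fiscal year 1998 allocation (hypothetical amounts)."""
    total_youth = sum(foster_youth.values())
    share = {s: CAP * n / total_youth for s, n in foster_youth.items()}
    floor = {s: max(fy1998_awards.get(s, 0), FLOOR_MINIMUM) for s in foster_youth}

    held = {s for s in foster_youth if share[s] <= floor[s]}        # hold-harmless states
    remaining = CAP - sum(floor[s] for s in held)                   # funds left for the others
    other_total = sum(share[s] for s in foster_youth if s not in held)

    return {
        s: floor[s] if s in held else remaining * share[s] / other_total
        for s in foster_youth
    }

if __name__ == "__main__":
    youth = {"State A": 90_000, "State B": 4_000, "State C": 1_000}          # hypothetical
    fy98 = {"State A": 12_000_000, "State B": 8_000_000, "State C": 300_000} # hypothetical
    print(allocate(youth, fy98))  # State B is held harmless; the rest split the remainder
```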
We could not determine the amount of FCIA funding states had available to spend on each youth eligible for independent living services because of the lack of data on eligible youth emancipated from foster care. However, available data on youth in foster care suggest that states may have different amounts of funds available for services to youth in foster care. We compared each state's 2004 FCIA allocation with its 2002 population of eligible youth in foster care (this per-youth calculation is sketched below). This comparison showed that funding for independent living services ranged from $476 per foster youth in West Virginia to almost $2,300 per youth in Montana, as shown in figure 1. These differences were due in part to the new provision in FCIA that allowed states to define the age ranges within which youth were eligible for independent living services. For example, 4 states reported in our survey that they offered independent living services to youth beginning at age 12, while 27 states reported offering services beginning at age 14. In addition, the funding formula is based on the total number of all children in foster care. However, some states have a larger share of youth eligible for independent living services than other states. For example, of the 15 states reporting in our survey that youth are eligible for services between the ages of 14 and 21, 3 states had 25 percent or less of their foster care population within this age range, while in 3 other states, this age range accounted for over 40 percent of the total foster care population. Following the passage of FCIA, many states reported expanding eligibility for independent living services to younger and older youth and providing new services, such as Medicaid health insurance, to youth who had already left the foster care system. Further, the states we visited reported using the new funds to improve the quality of existing independent living services, refocus the attention of their programs, or develop new services to assist youth of all ages in independent living programs. However, states varied in the proportion of eligible youth served. For example, 40 states providing these data in our survey reported serving between 10 and 100 percent of foster care youth eligible for independent living services in 2003. A number of factors may have contributed to these differences, including gaps in the availability of critical services, such as mental health services, mentoring, and housing, as well as challenges in engaging youth and foster parents in the program. After the passage of FCIA, 40 states reported in our survey expanding services to youth younger than they had previously served, 36 states reported serving older youth, and the states we visited reported improving service quality. While some states had been using nonfederal funds to provide services to youth in these broader age groups, the number of states that reported providing core independent living services, such as independent living skills assessments, daily living skills training, and counseling, to youth younger than 16 more than doubled after FCIA. Similarly, more states reported offering these supports and services to youth who were emancipated from foster care after the passage of FCIA (see fig. 2). Many states also began to offer the new services under FCIA that would allow them to meet the unique needs of youth who emancipated from foster care. These services include the Education and Training Vouchers, Medicaid health insurance, and assistance with room and board. 
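The per-youth comparison described above is a simple ratio of each state's FCIA allocation to its count of youth eligible for independent living services under that state's age rules. A brief sketch follows, using hypothetical figures rather than the actual state data.

```python
# Hypothetical example of the per-youth funding comparison described above:
# divide a state's FCIA allocation by its count of eligible youth.
# The figures below are illustrative only, not actual state data.
allocations_2004 = {"State A": 500_000, "State B": 2_100_000}   # dollars
eligible_youth_2002 = {"State A": 1_050, "State B": 920}        # youth eligible under state age rules

per_youth = {
    state: allocations_2004[state] / eligible_youth_2002[state]
    for state in allocations_2004
}
for state, amount in per_youth.items():
    print(f"{state}: ${amount:,.0f} per eligible youth")
```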
All states, the District of Columbia, and Puerto Rico were allocated funds under the ETV program to assist youth seeking postsecondary education. The 4 states we visited had started to implement their ETV programs at the time of our site visits and had plans to use the funds in different ways. Texas officials said that youth would be able to use ETV funds for educational expenses, housing, food, clothing, or day care, so that the funds would provide relief for youth who want to continue their education but are concerned about paying bills while attending postsecondary school full-time. Connecticut officials said they would use ETV funds to provide computers to youth in postsecondary education and training programs and to establish an additional liaison between the independent living program and the Job Corps program. Florida officials said they would use ETV funds for educational expenses for youth receiving the state's independent living scholarship. Washington plans to use ETV funds to expand and enhance service delivery for education and training, and service providers will be evaluated on their success in helping youth reach desired educational outcomes. Of the 50 states responding to our 2004 survey, 31 reported offering Medicaid benefits to at least some emancipated youth to help them maintain access to health care benefits while they transitioned to independence (see fig. 3). Some states may limit coverage to specific subpopulations of emancipated youth. For example, according to officials in Florida, the state limits Medicaid access to emancipated youth who meet minimum academic requirements to qualify for the state's independent living scholarship program. In our 2004 survey, 46 states reported that they offered assistance with room and board to youth who had been emancipated from foster care, and the states we visited reported offering a range of housing supports to assist youth. Connecticut provided several housing options to meet the needs of youth at varying levels of independence, including group homes, supervised apartment sites, and unsupervised apartment sites with periodic visits from case managers. While other states we visited offered a more limited supply of housing options, all provided some type of housing subsidy or placement. For example, Texas and Washington provided youth with a monthly stipend for rent as well as a one-time stipend for household supplies. According to officials in the states we visited, Chafee Program funds were also used to improve the quality of existing independent living services, refocus program priorities, or develop new services to assist youth of all ages in independent living programs. Local officials in Florida said that prior to FCIA, training in daily living skills was provided haphazardly, and in many cases unqualified staff taught classes even though such training was considered a core component of the state's independent living program. After FCIA, Florida officials said that the state redesigned staff training, improved instructor quality, and was better prepared to provide youth with the skills necessary to live independently outside of the foster care system. In Texas, a service provider reported that FCIA encouraged the state to incorporate more experiential learning opportunities in the daily living skills curriculum. For example, the curriculum in one locality included taking the youth on a shopping trip to the grocery store while working within a set budget. 
Similarly, in one local area in Florida, youth in the independent living program described a scavenger hunt in which they were required to take public transportation around the city and conduct certain activities that emphasized their daily living skills training, such as going to the bank and opening a checking account. Washington officials reported that FCIA was instrumental in shifting the emphasis of the state's independent living program to focus on educational achievement, and some regions in the state developed summer enrichment programs to provide youth with year-round opportunities to keep up with their peers academically or to further their educational development. Officials in Connecticut reported using additional funds to develop mentoring programs and to establish adolescent specialist positions in each local child welfare office. States differed in the proportion of eligible youth served under their respective independent living programs, and officials in the 4 states we visited reported gaps in providing critical services, as well as challenges in engaging youth and parents in the services offered. Complete data that show how many youth states are serving through the independent living programs are not available, and while these programs serve both youth in foster care and emancipated youth, data we were able to collect from some states were limited to youth in care. Data from our 2004 state survey showed that 40 states responding to our survey reported serving about 56,000 youth—or approximately 44 percent of youth in foster care who were eligible for independent living services in these states. However, there were substantial differences among states in the proportion of youth served, ranging from a low of 10 percent up to 100 percent of the state's eligible foster care population. As shown in figure 4, about one-third of reporting states were serving less than half of their eligible foster care youth population, while an equal percentage of states were serving three-fourths or more. The extent to which these differences were related to whether the states served higher or lower numbers of youth who emancipated from foster care is unknown. While states expanded eligibility to younger youth, services in most of the states we visited continued to be directed primarily at youth age 16 and older. For example, Texas officials told us that although the state lowered the age at which youth are eligible for independent living services to 14 years, serving youth age 16 years and older is the highest priority, and serving younger youth within the various regions is dependent on available funding. In addition, while Washington expanded eligibility to serve youth as young as 13 years, state officials reported that the state has yet to develop a contract for providers to offer services to youth 13 to 15 years old, and few regions have developed services for youth in this age range. Youth in foster care often require mental health services that continue beyond emancipation, but 3 states we visited cited challenges in providing youth with a smooth transition between the youth and adult mental health systems. Officials in Connecticut reported that it is critical for youth to receive mental health services because mental well-being affects every aspect of the youths' lives, including learning life skills, locating and maintaining employment, succeeding in school, and transitioning to a more independent setting. 
However, state officials reported that youth who did not qualify for the adult mental health system were left without critical services. In Florida, many individuals who had been served by the youth mental health system did not qualify for adult services because of more stringent eligibility requirements, possibly losing access to important treatments and therapies. In Washington, caseworkers reported that the adult mental health system did not provide the same level of services as the youth system, and long waiting lists sometimes prevented youth from accessing critical services. Research studies indicate that the presence of positive adult role models is critical for youth in foster care because family separations and placement disruptions have been found to hinder the development of enduring bonds, but officials in the states we visited cited challenges in providing all youth with access to mentoring programs to establish and maintain such relationships. Although the majority of states reported in our 2004 survey that they offered mentoring programs to youth, officials in Texas and Florida reported that formal mentoring programs were not available throughout their states. Connecticut officials said they used FCIA funding to develop a statewide mentoring program, but the state is still working to expand program availability to youth in all regions. In addition, one program director reported challenges recruiting adults to serve as mentors, especially men willing to make a 1-year commitment to an adolescent boy. Some state and local officials and service providers seemed unclear about what should be included in a quality mentoring program and how to identify qualified service providers. For example, a nonprofit service provider delivering independent living services in an urban county in Washington reported being unfamiliar with how a mentoring program should be run and said that guidance on how to identify and train mentors would be helpful. Youth we spoke with across the 4 states we visited said that locating safe and stable housing after leaving foster care was one of their primary concerns in their transition to independence, but state officials reported challenges meeting youths' housing needs. Youth reported difficulties renting housing because they lacked an employment history, a credit history, or a co-signer. State and local officials in the states we visited said the availability of housing resources for foster youth during their initial transition from foster care depended on where they lived, and in some cases the benefits provided did not completely meet the needs of youth or were available only to certain youth. For example, in Washington, local officials reported that housing subsidies may not completely offset expenses for youth in expensive urban areas like Seattle and that rental housing in some rural areas is scarce. In Florida, youth must be full-time students to receive full housing benefits. In addition to reporting these service gaps, youth said during our site visits that living in rural areas made it difficult to access independent living services, and state and local officials concurred that services for youth in rural areas were sometimes limited. These difficulties were often related to a lack of transportation or of providers willing to offer services in remote areas. In large states such as Texas and Florida, the long distances separating some youth from available service providers made it difficult for youth to access services on a regular basis. 
Additional challenges include the lack of employment opportunities and transitional housing for youth living in some regions. Finally, state and local officials and service providers in the 4 states we visited said that it was difficult to get some youth to participate in the independent living programs and that foster parents were sometimes reluctant partners. While youth were generally offered incentives, such as cash stipends, to participate in daily living skills training or other activities, officials emphasized that participation is voluntary and that it is critical for foster parents to support and encourage youth participation in the program. Florida and Washington officials said that some foster parents were reluctant to transport youth to classes or meetings because of scheduling conflicts or long distances from training locations and did not always reinforce classroom training in daily living skills by allowing youth to practice skills such as cooking or financial management. FCIA emphasized the need to provide training to help foster parents understand and address the issues confronting adolescents preparing for independence, and nearly all states reported in our 2004 survey that they provided some training to foster parents in this regard. However, the number of parents trained differed across the 34 states reporting data—while some states reported training 1,100 or more, others reported training as few as 24. After FCIA, 49 states reported increased coordination with a number of federal, state, and local programs that can provide or supplement independent living services, but officials from the 4 states we visited reported several barriers to developing the linkages necessary to access services under these programs across local areas. These barriers include a lack of information on the array of programs available in each state and in local areas, as well as differences in performance measures among programs. Many child welfare caseworkers, foster parents, and youth we spoke with during our site visits were unaware of the full array of youth and adult support services available to youth while in foster care and after emancipation. Federal, state, and local agencies oversee a wide range of programs providing services that may assist youth in their transition to adult life and that include current and former foster youth among their target populations. In our 2004 survey, 49 states reported that, since the passage of FCIA, they had increased coordination with federal, state, and local agencies and private organizations to provide a wide variety of services to youth. Table 3 displays selected key independent living services and the most prevalent service providers. States we visited used different strategies to develop linkages among state youth programs. Three of the states we visited reported establishing state-level work groups that included representatives from the independent living program and other state agencies to bring agency officials together to discuss the needs of youth in foster care and possible strategies for improving service delivery. For example, Florida's legislature mandated a state-level work group to facilitate information sharing at the state level among various agencies, such as the State Departments of Children and Families and Education, the Agency for Workforce Innovation, and the Agency for Health Care Administration. 
Texas was developing a strategy to redesign the provision of social services in the state, including services to youth in the independent living program. The goals of this effort included establishing a local, cross-system network composed of youth in foster care, emancipated youth, caregivers, and professionals to facilitate linkages between stakeholders and improve the delivery of services to youth transitioning out of foster care. Additional strategies states developed to establish linkages with other federal, state, or local programs included establishing liaisons between agencies or programs or using less formal collaborative arrangements. In Connecticut, the child welfare agency established a liaison position with the Job Corps program to meet with foster care youth to determine whether they were appropriate candidates for the program, and to monitor their progress, address any obstacles or concerns, and help youth plan for the future. In addition, a liaison between the independent living program and Connecticut’s mental health agency assists youth in their transition to the adult mental health system to ensure that youth who need the continued support maintain access to medication and services after youth leave the foster care system. In local areas in Texas and Florida, child welfare officials worked with local housing authorities to assist youth in accessing federal housing vouchers provided by HUD. For example, in Tallahassee, Florida, the local housing authority secured 30 of 100 available housing vouchers for youth emancipating from foster care and established a case manager position especially for the youth in the program. In Florida, the independent living program officials worked with the state’s youth mental health department to access the Assertive Community Treatment for Teens Program. The program consisted of community-based teams—nurses, job developers, housing and education officials, and other relevant stakeholders—who worked together to develop integrated service plans for youth with serious and persistent mental illness. In addition, officials reported developing linkages with other private resources in their communities, such as business owners, to provide services to youth in the independent living program. Connecticut independent living officials collaborated with business owners, nonprofit organizations, and other state agencies to develop an experiential employment training program that gave youth 16 and older the opportunity to learn skills through participation in workshops covering all aspects of a local business. For example, some youth worked in a boat-building business and learned skills ranging from carpentry and construction to sales and financial management. In one Florida county, independent living staff utilized a community resource known as the speakers’ bureau—a service that links members of the community with youth to talk about a wide range of professions and activities. Caseworkers said that as youth moved through their daily living skills curriculum, they were asked to decide whom they wanted as guest speakers. In Texas, the child welfare agency worked with the Orphan Foundation of America—a nonprofit organization—so youth could access a Web-based mentoring program. Youth participating in the program were matched with online mentors based on mutual interests, and they communicated regularly via e-mail or by phone. 
While table 3 shows that states are using a wide variety of programs to provide independent living services, officials in the 4 states we visited reported several barriers that hinder their ability to establish linkages with other agencies and programs, including the lack of information on the array of programs available in each state or local area and differences in program priorities. Officials from 3 states we visited said that they relied on local officials to identify potential partners and initiate and maintain coordination efforts, and while individuals in some local areas may have developed successful collaborations with service providers in their area, these relationships have not always been expanded statewide. To some extent, this is because state and local child welfare officials differ in their awareness of resources available from various federal and state agencies. Local officials in one area of Florida were working with a U.S. Department of Labor workforce program to train and find employment for youth, while officials in another local area of the state were not familiar with this program. In one local area in Washington, independent living coordinators and caseworkers expressed concern about access to affordable health care for youth emancipating from foster care and were not aware of a federal health center located nearby that was required to provide medical and mental health services on a sliding fee scale. These gaps in awareness may be partly due to the caseworker turnover rates reported by the states we visited. Caseworkers’ lack of knowledge about available programs may have contributed to foster parents and youth reporting that they were unaware of the array of services available from other federal, state, or local programs. Officials in the independent living programs in the states we visited also cited barriers to establishing linkages with other federal and state programs because of different program priorities. Differences in performance goals among programs can affect the ability of independent living staff to obtain services for foster youth from other agencies. For example, child welfare and workforce officials in Florida reported that performance goals for workforce programs may act as barriers to serving youth in the child welfare system who may be more difficult to place in employment and might not maintain the jobs once placed, potentially bringing down workforce program performance measures. As a result, the officials reported that local workforce programs may target those individuals with whom they can most easily achieve successful outcomes and foster youth may be unable to access services they need to achieve positive employment outcomes. According to independent living service providers in one local area in Washington, privacy concerns were a barrier to developing linkages with education programs. For example, schools in one area we visited would not allow anyone besides biological parents—including caseworkers and foster parents—access to youths’ individualized education programs. Yet according to caseworkers, foster parents, and service providers, lack of access to these plans made it difficult to align the individualized education programs with the youths’ independent living plans. 
All states developed multiyear plans as required under FCIA and submitted annual progress reports to ACF for their independent living programs, but the absence of standard comprehensive information within and across state plans and reports precludes using them at the state and federal level to monitor how well the programs are working to serve foster youth. HHS has not yet implemented its plan to collect information to measure states’ program performance, and while some states reported collecting some data, states have experienced difficulties in contacting youth to determine their outcomes. HHS has begun to evaluate selected independent living programs, and officials reported that the results of this evaluation should be available in 2007. All states developed state plans as required by FCIA that described independent living services they planned to provide to foster youth and submitted annual reports to ACF, but for several reasons, these plans and reports cannot be used to assess states’ independent living programs. To assist states in preparing these documents, ACF developed guidance that set out broad expectations for the plans and reports that would meet the FCIA requirements. However, while ACF officials stated that the plans and annual reports served as the primary method the agency used to monitor states’ use of the Chafee Program funds, ACF did not require states to use a uniform reporting format, set specific baselines for measuring progress, or report on youths’ outcomes. As a result, each state developed plans and reports that varied in their scope and level of detail, making it difficult to determine whether states had made progress in preparing foster youth to live self-sufficiently. Our review of plans from 51 states covering federal fiscal years 2001 through 2004, and of annual reports for 45 states from federal fiscal years 2001 and 2002, showed that few states both organized the information in their plans to address the purposes of FCIA and presented specific strategies they would use to meet these purposes. For example, Nebraska’s plan was aligned according to the five purposes of FCIA, but when describing how the state would help youth receive the education, training, and services necessary to obtain employment, the plan provided only a broad statement about the collaborative efforts between state agencies without mention of specific strategies to deliver the services. In contrast, New Hampshire submitted a comprehensive state plan that described the state’s holistic approach to providing services to youth transitioning out of care, such as specialized trainings for youth, foster parents, and independent living staff; programs offered through community resources; and resources available for youth with emotional and physical challenges, but these services were not attached to any one purpose of FCIA. The plans also varied in their usefulness in establishing outcomes the states intended to achieve for youth. For example, the District of Columbia indicated that it would use Chafee Program funds to establish a computer lab where current and former foster care youth can search for jobs, but the plan did not establish any outcomes the District hoped to achieve with this service, such as the percentage of youth who find employment over a period of time. 
In contrast, the Nevada plan identified 2001 and 2002 as the baseline years for the number of foster youth who graduate or receive a general equivalency diploma (GED) and planned to increase by 3 percent each year the number of youth who receive a high school diploma or GED until the youth are within the overall state average. Annual reports for all 45 states contained information that did not directly relate to information in their state plans, making it unclear whether the differences were due to service changes or missing information. For example, in Hawaii’s plan, the state listed several services and supports provided to youth, including employment services, such as career exploration and job placement and retention. However, in each of its annual reports, the state did not mention offering or providing any employment-related services. Of the 90 annual progress reports we reviewed, 52 reports did not include clear data that could be used to determine progress toward meeting the goals of the states’ independent living programs. For example, Arkansas’ report for federal fiscal year 2002 listed several workshops provided to youth, such as money management and college preparation, and a count of the number of youth who participated in the workshops. In contrast, Nevada consistently reported data on youths’ participation in different independent living activities, including the changes between each year, progress towards meeting the goals established in its plan, and reasons for not yet meeting the goals. ACF officials said that they recognize the limitations of these documents as tools to monitor states’ use of independent living program funds, but explained that they rely on states to self-certify that their independent living programs adhere to FCIA requirements. Staff in ACF’s 10 regional offices conduct direct oversight of the program by reviewing the multiyear plans and annual reports, interpreting program guidance, and communicating with states when clarification about their plans or reports is needed. However, officials in three offices said that their review of the documents was cursory and that the plans and annual reports do not serve as effective monitoring tools. Only three regions reported that they conducted site visits to observe independent living programs in at least some states in their regions. The other regions reported that they do not have the funds to travel or that, when they do, the review is focused on other programs or planning efforts. One region commented that even if it had funds, ACF had not developed a standard mechanism for regional offices to use in monitoring states’ use of FCIA funds. Alternatively, ACF officials reported that the Child and Family Services Review (CFSR) used to evaluate the states’ overall child welfare systems could serve as a tool to monitor independent living programs, but the CFSR is limited in the type and amount of data collected on youth receiving independent living services. While states evaluated under the first year of the CFSR were rated on the provision of independent living services to youth in care, this item was removed in subsequent reviews. ACF redesigned its review instrument with the intent of focusing on setting and achieving appropriate permanency goals for children rather than evaluating specific services. 
Despite the fact that independent living services are no longer a specific focus of the CFSR, ACF officials believe that two broader measures used in the review will provide opportunities to evaluate states’ performance in assisting youth: the measurement of the stability of foster care placements and the review of permanency goals of other planned permanent living arrangements, such as the goal of emancipation. However, some regional officials performing these reviews reported that the on-site portion of the CFSR is limited in scope and does not present an opportunity to determine if states are delivering independent living services to youth and if youth receiving such services are achieving better outcomes than their peers. Further, the CFSR includes a review of a small number of foster care case files and does not include a review of emancipated youth. ACF has not completed efforts to develop a plan to collect data on youths’ characteristics, services, and outcomes in response to the FCIA requirement, and some states that are attempting to collect information on youths’ outcomes are experiencing difficulties. In 2000, ACF started to develop the National Youth in Transition Database (NYTD) to collect information needed to effectively monitor and measure states’ performance in operating independent living programs. However, HHS officials stated that as of August 2004, implementation had not yet occurred. The agency has completed many of the steps laid out in its original plan dated September 2001, including consulting with child welfare and information technology professionals, developing a set of preliminary data elements and outcome measures, and pilot testing the data collection instruments with 7 states. However, HHS reported that it had not taken the next step of publishing the notice of proposed rulemaking because the agency decided to develop regulations for the data collection system in order to fulfill a statutory requirement to assess penalties on states for noncompliance. As a result, the proposal has been under internal review since the conclusion of the pilot test process in November 2001. HHS reported that it expects to issue guidance in the form of a proposed regulation in 2005. Officials in all the states we visited supported the idea of having data on former foster youth, and 26 states reported in our 2004 survey that they had begun to plan for the impending data reporting requirements despite the federal delays. Many states reported in our 2004 survey that they already collect many of the data elements HHS had developed as part of the consultation and pilot testing process (see table 4). In addition, some states are attempting to collect outcome information on former foster care youth but have experienced difficulties. According to results from our survey, in federal fiscal year 2003, 30 states attempted to contact youth who had been emancipated from foster care for initial information to determine their status, including education and employment outcomes. Of those states, most reported that they were unsuccessful in contacting more than half of the youth. Further, 21 states reported attempting to follow up with emancipated youth after a longer period of time had elapsed but had trouble reaching all the youth. Officials in the states we visited reported that collecting outcome data is especially challenging since there is little they can do to find youth unless the youth themselves initiate the contact. 
Further, some officials were concerned about the value of the outcome data since they believe that youth who are doing well are more likely to participate in the follow-up interviews, thus skewing the results. Similarly, an ACF regional official reported that the value of the NYTD would be determined by the resources available to states to support the follow-up component. Some state officials, caseworkers, and youth we interviewed suggested strategies states may need to develop to maintain contact with former foster care youth, including offering incentives to the youth to stay in touch; establishing a toll-free telephone line that will make the process of staying in touch as easy as possible; or using other resources that may help locate the youth or provide the necessary data, such as other service providers or other social services information systems. By December 2007, ACF expects to complete the evaluations of four approaches to delivering independent living services. As required by FCIA, these evaluations will use rigorous scientific standards, such as an experimental research design that randomly assigns youth in independent living programs to different groups: one that is administered the experimental treatment and one that is not. HHS initiated this effort in 2001 with a nationwide review of potentially promising approaches to delivering independent living services. HHS contracted with a research institute to conduct a nationwide search to identify independent living programs that meet the criteria of the evaluation and to conduct 5-year evaluations of the selected programs. On the basis of the search and the established criteria, HHS selected four programs for the evaluation (see table 7). The study is designed to answer the following questions: (1) How do the outcomes of youth randomly assigned to the identified interventions compare with those of youth who are assigned to “services as usual”? (2) For the identified programs, what are the features of these programs that are likely to influence their impact on youth clients? (3) How are these services implemented? (4) To what extent might these programs be adapted to other locales? (5) What are the barriers to implementation? Each program will be evaluated using similar techniques: in-person structured interviews to establish a baseline and to follow up with youth in the treatment and control groups; a Web-based survey of caseworkers; and program site visits including semistructured interviews with administrators, staff, and youth. All youth will be interviewed shortly following referral and random assignment, and 1 year and 2 years later. As of August 2004, all evaluation studies were in the early stages. Baseline interviews with the youth had begun or were completed in three sites and the process was starting in the fourth site. Many youth in the foster care system need additional services and support throughout and beyond their adolescence to make the transition to self-sufficiency. States have generally expanded their independent living programs to provide new and enhanced services to a wider age range of youth, but some states have been slower to implement the program, and foster youth across the nation may not have access to the full array of services they may need to lead independent and successful lives. 
While many other federal, state, and private resources exist to cover some shortfalls in service, the absence of information on resources available in local areas may continue to hinder efforts to establish needed linkages among programs. Similarly, while ACF provides some assistance to states, there is still a lack of awareness about available resources among caseworkers, foster parents, and youth that may further limit youths’ ability to access needed services once emancipated from the foster care system. While the Chafee Program funding is small compared with that of other child welfare programs, effective federal oversight requires reliable information on states’ implementation efforts and results. At a minimum, information from state plans and annual reports could be useful in federal oversight and monitoring. However, the ability of ACF to monitor state performance continues to be hindered by an absence of standard, comprehensive information within and across state plans on each state’s goals, services, and youth outcomes as measured against baselines of past achievement. Oversight is similarly hindered by a lack of standard monitoring practices across ACF regional offices. While ACF is developing an information system that may address some of these limitations, it may be unavailable for several years. In the meantime, additional actions to strengthen federal monitoring of state programs may serve to provide greater assurance of program accountability at the state and federal level. To improve access to the array of services available to youth transitioning out of foster care and assist states in leveraging available resources, HHS should make information available to states and local areas about other federal programs that may assist youth in their transition to self-sufficiency and provide guidance on how to access services under these programs. To improve HHS’s ability to monitor implementation of the Chafee Program, HHS should develop a standard reporting format for state plans and progress reports and implement a uniform process regional offices can use to assess states’ progress in meeting the needs of youth in foster care and those recently emancipated from care. We provided a draft copy of this report to the following agencies for comment: the Departments of Health and Human Services, Education, Labor, Housing and Urban Development, and Justice, and the Social Security Administration. We obtained comments from the Department of Health and Human Services, which are reproduced in appendix III. HHS also provided technical comments, which we incorporated as appropriate. The other agencies did not have comments on this report. HHS did not comment on our recommendation to make information available to states and local areas about other federal programs that may assist youth in their transition to self-sufficiency and to provide guidance on how to access services under these programs. HHS listed several efforts that it had undertaken to collaborate with other related federal agencies, such as Labor, Justice, and Education, to expand services to youth. While these efforts will help strengthen the relationships among federal agencies and better inform the states, we believe that implementing our recommendation to develop ways to better disseminate such information to state and local child welfare agencies and to provide assistance on ways to leverage these resources can improve services to youth both in and recently emancipated from foster care. 
HHS disagreed with our recommendation to develop a standard reporting format for state plans and progress reports but said it was taking action to implement a uniform process that its regional offices can use to assess states’ progress in meeting the needs of youth in foster care and those recently emancipated from care. HHS stated that taking action to standardize the reporting format for state plans and annual reports would be overly prescriptive and impose an unnecessary burden on states. HHS added that a significant change under the law was to require states to self-certify their compliance with statutory requirements in their state plan, and that rather than report on performance outcomes, the plan was intended to be a narrative to ensure state adherence to plan requirements and assurances. In addition, HHS reported that when standard data are available through the National Youth in Transition Database, the agency would be better positioned to determine how best to assess state performance. HHS further reported that ACF did provide regional office staff with a checklist to review and approve the first state plan and that in fiscal year 2005, ACF will develop and provide a review protocol to be used in regional office desk reviews of states’ annual progress reports. We continue to believe that strengthening the state reporting process is needed to provide assurance of program accountability at the state and federal level. HHS officials stated that they consider their review of the state plans and annual reports as the primary method the agency uses to monitor states’ use of Chafee Program funds. However, comments by ACF regional officials conducting the oversight reviews—as well as our own review—have shown that the diverse format and content of these documents are insufficient for this purpose. Developing a standard reporting format that states can use for their plans and annual reports would help HHS improve the efficiency of the reporting process by clarifying the broad guidance ACF provides to the states, allowing ACF reviewers to quickly identify states’ progress toward meeting program goals, and thereby reducing the burden of the reporting process currently in place. As we reported, some states have already taken action to establish baselines and goals as well as strategies for action in their state plan that can be linked with information in the annual progress reports to identify areas of strength and needed improvement. HHS should consider these efforts undertaken by states and take a cooperative approach in working with them and cognizant national organizations in developing a standard report format to garner support and reduce perceptions of burden. HHS could, for example, continue its partnership with the workgroup that contributed to the NYTD proposal or convene a session at the annual conference with state independent living coordinators. HHS action to implement our recommendation may also serve to strengthen the usefulness of uniform review protocols that ACF plans to develop for use by regional staff in evaluating state progress during their annual desk reviews of state performance. As agreed with your offices, unless you publicly announce its contents or authorize its release earlier, we plan no further distribution of this report until 30 days after its issue date. 
At that time, we will send copies of this report to the Secretaries of Health and Human Services, Education, Housing and Urban Development, Labor, and Justice; relevant congressional committees; and other interested parties. Copies will be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (415) 904-2272 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix IV. To determine how states’ funding allocations changed to serve youth after the Foster Care Independence Act of 1999 (FCIA), we analyzed federal funding to the 50 states, the District of Columbia, and Puerto Rico for independent living programs before and after the passage of FCIA. We compared state allocations after the passage of FCIA with the numbers of eligible youth in foster care in each state to determine available funding per eligible youth across states. To perform this comparison, we used data reported by states to the Department of Health and Human Services (HHS) in the Adoption and Foster Care Analysis and Reporting System (AFCARS) on the numbers of eligible youth in foster care within each state in federal fiscal year 2002. Since states’ funding allocations are based on AFCARS data, we determined that these data were the best available information for the purposes of this analysis. The procedures the agency uses to assess data quality, which include identifying out-of-range or missing data, were sufficient for our purposes. To determine the age ranges of youth in foster care eligible for independent living services within each state, we used data reported by states in our 2004 survey of state independent living coordinators. In addition to reviewing these data, we interviewed HHS staff in headquarters and each of the 10 regional offices. To determine the extent to which states expanded independent living services and age groups of foster youth served since the passage of FCIA, as well as what challenges remain, we surveyed all 50 states, the District of Columbia, and Puerto Rico through a Web-based questionnaire. We pretested the survey instrument with administrators of the independent living program in Texas, Florida, Washington, Maryland, and Connecticut. On the basis of the feedback from the pretests, we modified the questions as appropriate. Information about accessing the questionnaire was provided via e-mail. To ensure security and data integrity, we provided each official with a password that allowed him or her to access and complete the questionnaire for his or her state. We received responses from all 50 states and the District of Columbia for a response rate of about 98 percent. Our survey collected a variety of state data, including information on services provided to youth, numbers of youth eligible for and served with independent living services, funding for independent living programs, and changes since the passage of FCIA. We designed the survey to parallel several questions from a 1999 GAO survey of states regarding their independent living programs in fiscal year 1998, prior to the passage of FCIA. We compared responses between the surveys to identify changes in state independent living programs since the passage of FCIA. The practical difficulties of conducting any survey may introduce errors, known as nonsampling errors. 
For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. In addition to conducting the survey, we visited independent living programs in 4 states (Texas, Florida, Washington, and Connecticut) to obtain more detailed information regarding the provision of independent living services and changes to state independent living programs since the passage of FCIA in 1999. We selected these states to represent a range in size of foster care populations, approaches to the provision of independent living services, federal allocations of independent living funds, and geographic locations. During our state visits, we interviewed state and local child welfare officials, caseworkers, contracted service providers, foster parents, and youth. We also spoke with HHS staff in the central office and each of the 10 regional offices; National Resource Center for Youth Development officials; and child welfare experts from various organizations including the National Independent Living Association, the Chapin Hall Center for Children, and the Casey organizations. To determine to what extent states used other federal and state programs to coordinate the delivery of independent living services to foster youth, we surveyed states using the above-mentioned survey instrument. In addition, during our site visits we interviewed state and local child welfare officials; officials from other state agencies that provide services that may assist youth in their transition to self-sufficiency; and contracted service providers, caseworkers, foster parents, and youth. We also spoke with HHS officials in the central and regional offices and officials at a number of federal agencies that are responsible for programs that may benefit transitioning youth. These included the U.S. Departments of Education, Labor, Housing and Urban Development, and Justice; the Social Security Administration; and the Substance Abuse and Mental Health Services Administration. To determine how states and HHS fulfilled the accountability provisions of FCIA, we analyzed Chafee Foster Care Independence Program state plans for 49 states, the District of Columbia, and Puerto Rico for fiscal years 2001-2004 to determine each state’s program goals and strategies. We also analyzed 90 annual progress and services reports (annual reports) that states submitted regarding the progress made in implementing their Chafee plans for fiscal years 2001 and 2002. We obtained the plans and progress reports from the Administration for Children and Families (ACF) regional offices after consulting with ACF’s central office and the National Resource Center for Youth Development. One regional office could not provide us with the state plan for Wyoming and the annual reports for Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming in the time requested. Therefore, these states are not included in our analysis. In addition, we received only one report for Tennessee (federal fiscal year 2001) and Puerto Rico (federal fiscal year 2002). We developed a data collection instrument (DCI) based on the federal guidance that described how states were expected to develop their plans and reports. 
A DCI was completed for each state plan and annual report, and another staff person reviewed each record for clarity and accuracy. We supplemented this analysis with discussions with state officials and ACF central and regional office officials. In addition, we reviewed ACF’s draft proposals for the National Youth in Transition Database (NYTD) and talked with the contractor staff responsible for development of this system and the multisite evaluation of promising independent living programs. We were not able to obtain the most current information on the NYTD proposals. Therefore, we only have information as recent as August 2003, and our description of NYTD may not accurately describe the final proposal when it becomes available. In addition to those named above, Adam Roye, Catherine Roark, and R. Jerry Aiken made key contributions to this report. Diana Pietrowiak, Luann Moy, Catherine M. Hurley, and Amy Buck also provided key technical assistance. Casey Family Programs. Providing Education Related Supports and Services under the Chafee Independence Act of 1999: Selected State Activities, and Postsecondary Education and Training Voucher Information. Seattle, Washington, May 2003. Casey Family Programs. Assessing the Effects of Foster Care: Early Results from the Casey National Alumni Study. Seattle, Washington, October 2003. Chapin Hall Center for Children, University of Chicago. Midwest Evaluation of the Adult Functioning of Former Foster Youth: Conditions of Youth Preparing to Leave State Care. February 2004. U.S. Department of Health and Human Services, Administration for Children and Families; Administration on Children, Youth, and Families; Children’s Bureau. Title IV-E Independent Living Programs: A Decade in Review. Washington, D.C., November 1999. Wald, M., and T. Martinez. Connected by 25: Improving the Life Chances of the Country’s Most Vulnerable 14-24 Year Olds. William and Flora Hewlett Foundation Working Paper. Menlo Park, CA: William and Flora Hewlett Foundation, November 2003. White House Task Force for Disadvantaged Youth Final Report. Washington, D.C., October 2003. D.C. Child and Family Services Agency: More Focus Needed on Human Capital Management Issues for Caseworkers and Foster Parent Recruitment and Retention. GAO-04-1017. Washington, D.C.: September 24, 2004. Child and Family Services Reviews: Better Use of Data and Improved Guidance Could Enhance HHS’s Oversight of State Performance. GAO-04-333. Washington, D.C.: April 20, 2004. Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services. GAO-03-956. Washington, D.C.: September 12, 2003. Child Welfare: Most States Are Developing Statewide Information Systems, but the Reliability of Child Welfare Data Could Be Improved. GAO-03-809. Washington, D.C.: July 31, 2003. Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003. Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003. Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. 
Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000. Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999. Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999. Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 10, 1999. Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999. Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 1999. Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998. Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998.
To improve outcomes for youth leaving foster care, Congress passed the Foster Care Independence Act of 1999 (FCIA), which increased the allocation of federal funds for independent living programs from $70 million to $140 million. This report reviews (1) how states' funding allocations changed to serve youth after FCIA, (2) the extent to which states have expanded services and age groups of foster youth served since the passage of FCIA and what challenges remain, (3) the extent to which states have used other federal and state programs to coordinate the delivery of services to foster youth, and (4) how the states and the Department of Health and Human Services (HHS) have fulfilled the program accountability provisions of the law and assessed the effectiveness of independent living services. The doubling of federal funding for independent living programs has resulted in most states receiving an increase in funds. Although some states had difficulty expanding their program infrastructure in the first 2 years of increased funding, the amount of funds states returned to HHS declined the second year. Differences in funding also appeared in the amounts available per eligible foster care youth. Following the passage of FCIA, 40 states reported in our survey expanding independent living services to younger youth, and 36 states expanded services to older youth, but gaps remain in providing some key services to youth. State differences in serving youth may have been caused by gaps in the availability of critical services, such as mental health services, mentoring, and housing, as well as challenges in engaging youth and foster parents in the program. Almost all states that we surveyed reported increased levels of coordination under FCIA, but linkages with other federal and state youth-serving programs were not always in place to increase services available across local areas. Despite some coordination efforts, states may not make full use of available resources. One of the barriers to linking program services reported by the 4 states we visited was the inconsistent availability of information on the array of programs operating in each state and local area. States and HHS have taken action to fulfill the accountability provisions of FCIA, but little information is available to assess the effectiveness of independent living services. All states submitted required plans and reports, but the absence of a uniform reporting format and lack of standard monitoring practices among HHS regional offices hindered assessments of state performance. HHS is developing an information system that may improve program accountability and reported that it expects to issue a proposed regulation in 2005.
Within DHS, USCIS is responsible for adjudicating immigration benefit applications, including I-129Fs filed by U.S. citizens to bring a foreign national fiancé(e) to the United States through a K-1 visa. If the K-1 visa is issued, the INA provides that the petitioner and fiancé(e) must marry within 90 days of the fiancé(e)’s admission into the country, after which the K-1 visa expires. The I-129F petition can also be used to bring a noncitizen spouse to the United States under a K-3 visa while awaiting the approval of an immigrant petition and availability of an immigrant visa. Noncitizen fiancé(e)s, upon marriage to the petitioner, and noncitizen spouses who are admitted to the United States must then apply to adjust their status to lawful permanent resident by filing with USCIS a Form I-485, called an Application to Register Permanent Residence or Adjust Status. In fiscal year 2013, USCIS approved 30,400 I-129F petitions and State issued 30,290 K visas. The number of I-129F petitions filed, in general, has declined since fiscal year 2008, with the exception of fiscal year 2011, during which there was a slight increase over the previous fiscal year. USCIS approved the majority of I-129F petitions submitted from fiscal year 2008 through fiscal year 2013 (see fig. 1). The number of I-129F petitions approved in a fiscal year will not equal the number of K visas issued in the same fiscal year because, for example, State may adjudicate the visa applications in a subsequent fiscal year. Both USCIS and State’s Bureau of Consular Affairs play key roles in providing information about petitioners to beneficiaries. In accordance with IMBRA, once a USCIS officer approves an I-129F petition, USCIS must forward the approved I-129F petition and relevant information to State, which mails these materials and the IMBRA pamphlet—an informational document that outlines the legal rights and resources for immigrant victims of domestic violence—to the beneficiary. According to State’s FAM, consular officers must also discuss the pamphlet and petitioners’ criminal history information during the K visa applicant interview to ensure that the beneficiary understands his or her legal rights and access to victim services in the United States and has available information about the petitioner. IMBRA also establishes disclosure and other requirements for IMBs to help inform and provide greater assurance for the safety of beneficiaries who meet their potential U.S. citizen petitioners through an IMB. For example, IMBRA requires that IMBs collect specified information, such as criminal arrest and conviction information, from petitioners for disclosure, and obtain written approval from beneficiaries before releasing beneficiaries’ contact information to potential petitioners. DOJ is responsible for pursuing civil and criminal penalties under IMBRA and, pursuant to the Violence Against Women Reauthorization Act of 2013, was required to report to Congress on, among other things, the policies and procedures for consulting with DHS and State in investigating and prosecuting IMBRA violations by petitioners and IMBs. USCIS has implemented processes to collect information from petitioners; however, USCIS is in the process of revising the current version of the I-129F petition to address errors and omissions that may limit or otherwise affect the accuracy of petitioners’ disclosure to USCIS of all information required by IMBRA. 
The I-129F petition, along with any supporting documentation submitted by the petitioner, is USCIS’s primary source for information on petitioners’ prior I-129F petition filings and criminal convictions—key information that the U.S. government is required under IMBRA to obtain and disclose to beneficiaries. In particular, USCIS uses the information disclosed through the I-129F petition to (1) inform its criminal background checks of petitioners, and (2) determine if petitioners have filed prior I-129F petitions and are requesting one of the three IMBRA waivers, as appropriate. Conducting background checks on petitioners. Pursuant to IMBRA, USCIS conducts criminal background checks on each petitioner using the information provided on the I-129F petition. Specifically, according to USCIS’s standard operating procedures (SOP), USCIS officers are to conduct background checks using petitioners’ names and dates of birth against the TECS database within 15 days of receiving I-129F petitions. During a background check, if a TECS query returns a “hit,” USCIS officers are to forward this information to the background check unit located within each service center for further review. According to USCIS service center officials, the completeness of the criminal background information contained within TECS is dependent on the extent to which state and local law enforcement agencies enter complete information into the Federal Bureau of Investigation’s (FBI) National Crime Information Center database (NCIC). USCIS is to subsequently provide this information to State, whose consular officers are to share this information with beneficiaries during the K visa interview. When sharing this information with the beneficiaries, consular officers must also inform them that the criminal background information is based on available records and may not be complete, something the IMBRA pamphlet also notes. Consistent with IMBRA, the waivers for the filing limits apply only to K-1 petitioners (see § 1184(d)). Where the filing limits apply, USCIS officers are to request that the petitioner provide a waiver request letter and supporting evidence before deciding whether to approve or deny the petition. USCIS may deny a waiver request if the petitioner fails to provide sufficient documentation in support of the waiver within 12 weeks, or if the documentation provided does not justify granting a waiver. USCIS officers may also deny an I-129F petition if they discover that a petitioner does not, for example, fully disclose an IMBRA-specified offense conviction or protective order information. According to USCIS’s standard operating procedures, USCIS officers are to use the information obtained from the background check, CLAIMS 3 data on prior filings, and the I-129F petition and supporting evidence to determine if petitioners have disclosed all of the information required by IMBRA. However, the I-129F petition contains errors and omissions that we, USCIS, and DOJ have identified and that may limit or affect the accuracy of information disclosed by a petitioner. Specifically, the I-129F petition inaccurately describes IMBRA’s filing limits and does not fully address IMBRA’s disclosure requirements. 
In particular, the language on the I-129F petition states that the filing limits apply to petitioners who have filed three or more I-129F petitions, or who have filed three or more I-129F petitions and the first I-129F petition was approved within the last 2 years, whereas the instructions accompanying the I-129F petition align more closely with IMBRA and provide that a waiver is required if a prior I-129F petition had been approved in the past 2 years. USCIS Service Center Operations officials stated that the I-129F petition does not accurately describe the filing limits and therefore there is a risk that petitioners are disclosing inaccurate information regarding their filing history on the I-129F petition, which may affect how USCIS evaluates whether a petitioner requires a waiver. In October 2014, in response to our audit work, USCIS modified its website to inform petitioners that the petition is inaccurate and provide them with instructions that clarify the requirements. In addition, DOJ officials responsible for enforcing IMBRA stated that they have been working with USCIS on revisions to the I-129F petition to better ensure that IMBRA’s disclosure requirements are met. For example, USCIS Service Center Operations officials noted that, in consultation with DOJ, they plan to include questions on the I-129F petition regarding whether petitioners have civil protective or restraining orders, and prior arrests or convictions related to prostitution. According to USCIS Service Center Operations officials, as of August 2014, USCIS was in the process of revising the current version of the I-129F petition. According to A Guide to the Project Management Body of Knowledge, which provides standards for project managers, specific goals and objectives should be conceptualized, defined, and documented in the planning process, along with the appropriate steps, time frames, and milestones needed to achieve those results. USCIS Service Center Operations officials stated that there is no target time frame for completing revisions to the I-129F petition within USCIS before DHS and the Office of Management and Budget (OMB) undertake their respective reviews, in part because of the interagency review process among DHS, State, and DOJ, which had been ongoing for approximately 10 months as of August 2014. USCIS officials noted that until revisions to the I-129F petition are complete, petitioners can refer to the Form I-129F Instructions, which USCIS makes available as a separate document, or to the clarifying instructions added to its website in October 2014, and USCIS officers should follow the I-129F SOP, each of which more accurately describes IMBRA’s filing limits and circumstances under which a waiver must be requested. However, USCIS officials acknowledged that petitioners may not use the instructions in completing the I-129F petition since they are contained in a separate document and are not referred to on the I-129F petition. Further, as we discuss later in this report, our review of CLAIMS 3 data indicates that USCIS officers have not consistently followed the I-129F SOP, which USCIS has modified multiple times since the summer of 2013 to address, among other things, inaccuracies in the language associated with the application of IMBRA waivers. USCIS has previously revised the I-129F petition. 
For example, in July 2007, USCIS revised the I-129F petition to require that petitioners disclose criminal convictions, prior I-129F filings, and the use of IMBs. In June 2013, USCIS further revised the I-129F petition by adding, among other things, a section for USCIS officers to denote for State officials whether the I-129F petition contains prior filing or criminal history information that must be disclosed to the beneficiary. According to USCIS Service Center Operations officials, including the time for public comment and OMB’s review of the proposed revisions, it took nearly 2 years to issue the revised I-129F petition (issued in June 2013). USCIS officials noted that until the revisions to the I-129F petition are completed, the agency is at risk of not collecting complete information from petitioners. Establishing time frames for when USCIS will complete its review of the I-129F petition would help the agency better monitor progress toward finalizing revisions to the petition, which are intended to ensure that IMBRA’s disclosure requirements are met. State has established processes to disclose and provide IMBRA information to beneficiaries, such as petitioners’ criminal history information, prior I-129F filings, and the IMBRA pamphlet, in accordance with IMBRA and agency guidance to consular officers. However, State’s consular officers have not consistently documented that beneficiaries, at the time of their in-person interviews, received all of the required information. Relevant guidance to consular officers, found in State’s FAM, outlines procedures consular officers are to follow, including requirements for documenting that beneficiaries have received the required information. State officials indicated that, in accordance with IMBRA, beneficiaries are provided with IMBRA information and disclosures at two points in the K visa application process—(1) in the mailing of the K visa application package and (2) at the in-person visa interview, where disclosure is to be documented. Application package. State’s FAM requires that upon receipt of the approved I-129F petition and other information from USCIS, consular officers provide IMBRA-related disclosures and the IMBRA pamphlet to beneficiaries by mail as part of an application package. In August 2008, we recommended that DHS and State develop a mechanism to ensure beneficiaries are notified of the number of previously filed I-129F petitions by the petitioner. In response, in October 2008, State revised its guidance to consular officers to require that the application package include the approved petition. According to consular officers at four of five consular posts we interviewed, their respective posts mailed the IMBRA disclosures, pamphlet, and approved petitions to beneficiaries in advance of the in-person visa interview. Consular officials at one post we interviewed in June 2014 stated that they did not mail IMBRA-related disclosures, such as the I-129F petition containing criminal history information, to beneficiaries in advance of their interviews because of limitations in the post’s support contract for mail services. Rather, this post provided the IMBRA-related disclosure information to beneficiaries only during the K visa interview. As a result of our audit work, State’s Consular Affairs Bureau officials in Washington, D.C., provided guidance to this post to ensure that consular officers mail IMBRA-related disclosure information and the IMBRA pamphlet to all beneficiaries prior to visa interviews, in accordance with IMBRA and FAM requirements. 
Not all beneficiaries who are sent an application package schedule a K visa interview, according to Consular Affairs officials. Ultimately, consular officers said there are various reasons an applicant might not apply for a visa, and could not say to what degree the information provided in the mailings in advance of the interviews was a factor in this decision. Applicant interview. Consular officers are to provide the beneficiary with information about the petitioner’s criminal history and prior I-129F petition filings and the IMBRA pamphlet in the beneficiary’s primary language during the K visa interview, in accordance with IMBRA, and allow time for beneficiaries to consider the information. State requires consular officers to document within its IVO database whether they made all of these disclosures to the beneficiary during the visa interview. For example, State’s FAM requires consular officers to denote within IVO that the “IMBRA pamphlet was received, read, and understood” for each K visa beneficiary. According to Consular Affairs officials, other than this FAM requirement for documentation in the consular notes in IVO, State does not have other mechanisms by which it ensures that consular officers are providing required information to K visa applicants during the in-person interviews. Regarding the remaining 80 of the 227 cases in our review, State consular officials in Washington, D.C., said that possible reasons the interview may not have taken place are that the interview has yet to be conducted with the K visa applicant, or the case may not have been sent from USCIS to State for adjudication. Our review found, however, that consular officers did not consistently make the required notations in accordance with FAM guidance. Specifically, we found that consular officers fully documented that the IMBRA pamphlet was received, read, and understood in 21 of the 84 cases (or about 25 percent); however, we found that in 15 of the 84 cases (or about 18 percent), consular officers partially documented that the IMBRA pamphlet was provided to the beneficiaries. In the cases for which consular officers provided partial notations, we found that the notes varied from “IMBRA given” to “domestic violence brochure given.” Moreover, for the 63 of the 147 cases where State’s data indicated that consular officers had interviewed beneficiaries but for which there was no corresponding USCIS record of the beneficiary requesting a change to Lawful Permanent Resident status, we found that in 28 (or about 44 percent) of these 63 cases consular officers did not document that the IMBRA pamphlet was received, read, and understood by beneficiaries. Full documentation regarding the IMBRA pamphlet was noted in 26 (or about 41 percent) of these 63 cases, and partial documentation was noted in 9 (or about 14 percent) of the cases; these rates follow directly from the case counts, as shown in the illustrative calculation at the end of this discussion. In our guide for assessing strategic training and development efforts, we have reported that training is essential to developing the knowledge and skills needed to administer agency programs properly. According to Consular Affairs officials, both the FAM and a relevant guidance cable on IMBRA implementation clearly describe the documentation requirements for the disclosure of information to beneficiaries during interviews, and accordingly, these officials attributed the lack of documentation in IVO to consular officer error. State last issued a cable on IMBRA implementation, which covers the FAM’s IMBRA-related documentation requirements, to consular posts in 2012. 
According to Consular Affairs officials, State generally does not send frequent cables to overseas posts to reiterate FAM requirements unless there are significant changes to sections of the FAM that warrant additional guidance or explanation to consular officers. However, these officials stated that they planned to send another cable to all overseas posts in the fall of 2014, given recent revisions to the FAM on IMBRA implementation. In response to our work, they said that they could include a reminder in that cable for consular officers to follow the FAM's IMBRA-related documentation requirements. We reviewed a draft of that cable in October 2014, and it includes, among other things, a reminder for officers to document in IVO that the IMBRA pamphlet was received, read, and understood for all K visa applicants. While the cable may be a helpful reminder for incumbent consular officers, State's consular officer training courses do not specifically address the FAM's IMBRA-related documentation requirements. Standards for Internal Control in the Federal Government maintains that federal agencies are to provide staff with the training necessary to carry out assigned responsibilities and meet the organizations' objectives. According to Consular Affairs officials, State offers two key courses to consular officers through its Foreign Service Institute on the adjudication of immigrant visas, including K visas—mandatory basic training for entry-level officers and a voluntary course for midlevel consular officers offered four times a year. However, these officials stated that the training courses are generally broad, covering many different types of nonimmigrant visas, and so their curricula do not include detailed procedures for every visa type, such as the FAM's IMBRA-related documentation requirements. For instance, State's Foreign Service Institute officials stated that the basic training course briefly covers State's IMBRA-related disclosure requirements in the instructor's notes, but does not address the FAM's requirement for consular officers to document these disclosures in IVO. Similarly, the voluntary course for midlevel consular officers does not address the FAM's documentation requirements. A Consular Affairs official stated that there may be some variation in the content of this course offered to midlevel consular officers, but when he teaches the course, he chooses to cover the FAM's IMBRA-related documentation requirements in his oral remarks. Moreover, Consular Affairs officials stated that midlevel consular officers are to provide training to entry-level officers on a routine basis on the FAM's IMBRA-related disclosure and documentation requirements. These officials added that State has an internal website for consular training, which includes a reminder for supervisory consular officers that orientation upon arrival and continuing on-the-job training at post is vital to develop fully proficient consular officers. Incorporating the FAM's IMBRA-related documentation requirements into State's training courses for consular officers could help State better ensure that consular officers are aware of the requirements so that they can be better positioned to more consistently document the disclosure of IMBRA information during interviews with K visa applicants.
Under IMBRA, DOJ is responsible for pursuing federal civil and criminal penalties outlined in the law for IMBs and petitioners who violate IMBRA provisions and for consulting with DHS and State in investigating and prosecuting such violations. However, DHS and State have not identified any potential IMBRA violations for referral to DOJ. As it has not received any referrals of IMBRA violations, DOJ has not brought civil or criminal cases against an IMB or petitioner under IMBRA. USCIS requests information on the I-129F petition regarding whether petitioners used an IMB and, if so, requests a copy of the signed consent form the IMB obtained from the beneficiary authorizing the release of contact information. However, USCIS officials at each of the four service centers we interviewed stated that, in their experience, few petitioners indicate the use of IMBs to facilitate relationships with their foreign fiancé(e)s, and accordingly, the agency has not referred cases to DOJ for further investigation and prosecution. In addition, DHS has a process for referring and investigating potential violations within the department; however, USCIS has not identified any potential violation for referral and investigation. In accordance with the FAM and consistent with IMBRA, if an IMB does not provide the required IMBRA disclosures to the beneficiary, consular officers are to note the lack of disclosure in IVO and refer the case to State’s Consular Affairs Bureau at headquarters for further review. Consular Affairs officials in headquarters are responsible for forwarding cases involving potential IMBRA violations to DOJ. Consular officers at all five consular posts we interviewed stated that they have not referred cases involving violations by IMBs for review because beneficiaries generally do not disclose the use of IMBs during the visa applicant interviews. In July 2013, DOJ reported to Congress on the status of DOJ, DHS, and State’s efforts to develop processes to effectively identify, investigate, and prosecute potential IMBRA violations. DOJ reported that it does not have sufficient information about the nature and potential volume of IMBRA violations necessary to develop a framework for prosecution. DOJ’s report outlined a number of actions each agency could address to more fully develop policies and procedures for identifying, investigating, and prosecuting IMBRA violations, such as developing mechanisms to better facilitate the sharing of IMBRA-related case notes among the agencies. DHS and State officials told us that they are coordinating with DOJ on ways to facilitate data collection and information sharing and that it is too early to determine when these actions may be completed. DOJ officials stated that the agency-specific actions will better position DHS and State to identify cases warranting investigation and prosecution by DOJ. For instance, as previously mentioned, DOJ proposed that USCIS consider revising the I-129F petition to include a question for petitioners about civil protective or restraining orders consistent with IMB disclosure requirements under IMBRA. In addition, DOJ proposed that State establish a mechanism for sharing IMBRA-related case notes from beneficiary interviews with USCIS and DOJ. Moreover, DOJ is working with State on the development of a checklist of questions for consular officers to ask beneficiaries to assist in the identification of potential cases involving IMBRA violations by IMBs. 
In October 2014, DOJ issued an IMBRA bulletin to assist stakeholders, such as state and local law enforcement entities and women's and immigrants' rights organizations, in identifying and reporting IMBRA violations to DOJ for prosecution. IMBRA mandates that DHS collect and maintain data necessary for us to review IMBRA's impact on the process for granting K nonimmigrant visas. In 2008, we reported that while USCIS had collected some data necessary for our study, most of the eight data elements identified by IMBRA and on which we reported were not maintained in a summary or reportable (i.e., electronic) format. For this report, we reexamined these eight data elements, which include information on the number of waiver applications submitted and I-129F petitions denied, and the reasons for the decisions. We found that data for two of the eight required elements are available, at least partially, in an electronic format in CLAIMS 3 and are reliable for our purposes. The remaining six elements were either not collected and maintained electronically or the electronic data collected are not reliable. For example, consistent with IMBRA, USCIS is to collect and maintain information annually on the number of IMBRA waivers (general, criminal, or mandatory) submitted, the number granted or denied, and the reasons for such decisions, but this information is not collected and maintained electronically. Rather, USCIS collects and maintains information on whether a waiver is required (rather than submitted), and the reasons for officers' decisions are handwritten on the hard copy of the petition and thus were not readily available for purposes of our review. Table 1 identifies the eight data elements specified by IMBRA and the extent to which USCIS collects and maintains reliable electronic data. USCIS has taken or is planning to take steps to better collect and maintain data from petitioners in an electronic format. For example, in 2008, we reported that USCIS was considering modifying its system to electronically collect and maintain the required data, and in 2012, USCIS updated CLAIMS 3 to address selected IMBRA requirements. Specifically, USCIS updated CLAIMS 3 to include a field for officers to note the number of I-129F petitions previously filed by the current petitioner, as well as a field to denote whether petitioners require any of the three IMBRA waivers; however, these updates do not specifically address the IMBRA requirement that annual data on the number of waiver applications submitted, the number approved and denied, and the reasons why the waivers were approved or denied be collected and maintained. These updates have helped USCIS collect and maintain additional data on I-129F petitions in an electronic format. However, USCIS did not update CLAIMS 3 to capture all of the data required by IMBRA, including the number of concurrent I-129F petitions filed by petitioners for other fiancé(e)s or spouses, or the extent to which petitioners have criminal convictions. USCIS officials stated that they did not include all elements in the 2012 system update because of resource constraints and to avoid rework in anticipation of the larger transition planned for all of USCIS's immigration benefit processes. In 2006, USCIS embarked on its multiyear Transformation Program to transform its paper-based immigration benefits process to a system with electronic application filing, adjudication, and case management.
As we reported in November 2011, USCIS envisions that once the Transformation Program is completed, new electronic adjudication capabilities will help improve agency operations and enable greater data sharing and management of information. USCIS expects the new system, the Electronic Immigration System (ELIS), to have features, for example, that will allow applicants to electronically view their benefit requests or provide additional documentation. Once ELIS is implemented, officers are expected to have electronic access to applications, as well as to relevant USCIS policies and procedures to aid in decision making, and to have electronic linkages with other agencies, such as State and DOJ, for data-sharing and security purposes. According to USCIS Service Center Operations officials, the agency will be able to collect and maintain more complete data, in a manner consistent with IMBRA, through the deployment of the electronic I-129F petition in ELIS. However, USCIS has faced long-standing challenges in implementing ELIS, which raise questions about the extent to which its eventual deployment will position USCIS to collect and maintain more complete data. In particular, in November 2011, we reported on USCIS's progress in implementing its Transformation Program and found that USCIS had not developed reliable or integrated schedules for the program, and as a result, USCIS could not reliably estimate when all phases of the Transformation Program would be complete. We recommended, among other things, that USCIS ensure its program schedules are developed in accordance with GAO's best practices guidance. DHS concurred with our recommendations and outlined actions USCIS would take to implement them, including developing an integrated master schedule to depict the multiple tasks, implementation activities, and interrelationships needed to successfully develop and deploy the Transformation Program. Since our November 2011 report, the Transformation Program schedule has encountered further delays. The 2008 Acquisition Program Baseline for the program showed that ELIS would be fully deployed by 2013; however, in July 2014, the Director of USCIS testified that full deployment was expected to be completed by 2018 or 2019, and USCIS officials could not say when the I-129F petition would be deployed in ELIS. As a result, it remains unclear what IMBRA-related data will be readily available electronically to USCIS for reporting purposes, to State for disclosure purposes, or to DOJ for investigating potential IMBRA violations once the Transformation Program is complete. USCIS officers have not consistently adjudicated I-129F petitions or entered complete and accurate data into CLAIMS 3. On the basis of our review of CLAIMS 3 data and interviews with USCIS Service Center Operations officials and USCIS officers at all four service centers, we identified errors related to the IMBRA data that USCIS has maintained since 2012 (see table 1). Specifically, our analysis indicates that USCIS's data are not reliable for determining (1) the number of I-129F petitions filed by persons who have previously filed I-129F petitions (or multiple filers), or (2) the number of IMBRA waivers required. Data on multiple filers. The May 2014 revisions to the I-129F SOP highlighted that the multiple filer field in CLAIMS 3 should include the total number of K-1 and K-3 I-129Fs filed by the petitioner. The August 2013 SOP did not specify the type of I-129F (K-1 versus K-3) to include in determining the number of prior petitions, so it was unclear whether officers were counting both K-1 and K-3 I-129F petitions in total for the multiple filer field, or only the number of K-1 I-129F petitions.
The May 2014 revision to the SOP emphasized that I-129F petitions for K-3 visas are not to be included in determining whether a waiver is required. However, at one service center we visited, officers we spoke to stated that they had been uncertain about whether both types of I-129F petitions should be considered for the waiver requirements. Accurate and complete data in the multiple filer field are important for identifying potential abuse by petitioners who file multiple I-129F petitions, and for officers to indicate when a beneficiary should be notified of multiple filings, according to USCIS officials. Data on IMBRA waivers. We found instances of errors and inconsistencies related to USCIS data on whether petitioners were subject to IMBRA's filing limits and required one of the three waivers. Specifically: According to IMBRA and the June 2014 SOP, petitioners may be required to request one of three waivers, and the waiver requirements are based, in part, on the number of I-129F petitions filed for K-1 visas only (petitions for K-3 visas are not to be included). We reviewed USCIS data on 227 I-129Fs filed from October 1, 2012, through March 31, 2014, for which the record in CLAIMS 3 indicated that a criminal waiver was required. We found that 18 of those 227 I-129F petitions were for K-3 visas. USCIS Service Center Operations officials acknowledged that these entries in CLAIMS 3 were incorrect and that these errors raise questions about the reliability of the CLAIMS 3 data and officers' understanding of standard operating procedures and IMBRA requirements. According to the June 2014 I-129F SOP, USCIS officers are to indicate in CLAIMS 3 whether a petitioner is required to have one of the three filing limits waivers. Officers are required to note a "Y" in one of three data fields if a waiver is required, or "N" if the waiver is not required. Consistent with IMBRA, only one waiver could apply per petition. However, on the basis of our analysis of CLAIMS 3 data, we found I-129F petitions for which officers incorrectly determined that more than one waiver was required. Specifically, of the 227 I-129F petitions we reviewed, 11 indicated that both a general and a criminal waiver were required, 14 indicated that both a criminal waiver and a mandatory waiver were required, and 15 indicated that petitioners required all three waivers. USCIS Service Center Operations officials attributed the multiple waiver determinations to officers' errors. USCIS officers we interviewed at one service center stated that they were uncertain about the requirements for the waivers in part because the majority of petitions they adjudicate each year do not require any waivers. The August 2013 SOP did not specifically contain guidance to officers that a petitioner could receive only one waiver, if appropriate. In June 2014, during the course of our audit work, USCIS updated the I-129F SOP to clarify the filing limits and waiver requirements; it now explicitly states that only one waiver selection per I-129F petition should be marked in CLAIMS 3, as applicable. While this revision to the SOP is a positive step, additional training could better position USCIS officers to be aware of petitioners' potential filing limits and IMBRA waiver requirements, and USCIS officials stated that such training could be provided to help ensure officers understand the IMBRA requirements. Consistent with IMBRA and the June 2014 I-129F SOP, a criminal waiver is required for multiple filers who have been convicted of an IMBRA-specified offense.
However, our analysis of USCIS's data indicates that officers have required criminal waivers for petitioners with no prior I-129F petition filings. Specifically, of the 227 I-129F petitions filed between March 2012 and March 2014 for which officers had indicated that a criminal waiver was required, 207 did not meet the criteria requiring a criminal waiver because the petitioner had not filed any previous petitions. USCIS officials said that officers were likely confused regarding when a criminal waiver was required and speculated that officers may be erring on the side of caution and requiring a criminal waiver and additional documentation from the petitioner in any instance of prior criminal convictions. For example, an officer at one service center we visited stated that he sends the petitioner a request for evidence for a criminal waiver if there is a criminal history, regardless of how many I-129F petitions have been filed. Ensuring that officers have a clear understanding of the waiver requirements in the SOP could help better position USCIS officers to make adjudications more consistent with IMBRA requirements. Consistent with IMBRA and the June 2014 I-129F SOP, I-129F petitions for K-3 visas are not subject to IMBRA waiver requirements. However, USCIS officers historically (prior to December 8, 2013) were not required to indicate in CLAIMS 3 whether the I-129F petition is in support of a K visa for a fiancé(e) or spouse. We found that about 72 percent of the I-129F petitions submitted from fiscal year 2008 through March 2014 (238,288 of 329,307) did not indicate whether the I-129F petition was for a K-1 or K-3 visa. USCIS officials stated that this was a technical issue that was likely overlooked during the system change in 2008. USCIS officials indicated that beginning in December 2013, officers could not approve an I-129F in CLAIMS 3 without noting which of the K visas the I-129F supports. Knowledge of whether the I-129F petition is for a K-1 or K-3 beneficiary is important because it is a key factor in determining whether a waiver is required, according to USCIS officials. While USCIS officers can review the hard copy I-129F petition to determine if it is an I-129F petition for a K-1 or K-3 beneficiary, this information would not be readily available for internal control purposes of ensuring that I-129F petitions are adjudicated according to the SOP and consistent with IMBRA. According to USCIS Service Center Operations officials, USCIS performs annual quality assurance reviews of I-129F petitions. USCIS's Quality Management Branch establishes the direction for the development and administration of the quality assurance program, training, communication, and coaching, and each service center has a quality manager and personnel who ensure administration of the quality assurance program within each center. Annual reviews include 3 months of submissions, reviewed for adherence to USCIS procedures for petition approval, denial, and requests for evidence. In 2014, USCIS's quality assurance reviews of selected I-129F petitions identified inconsistencies in their adjudication. For example, USCIS conducted a review of a random sample of I-129F petitions approved at the Texas Service Center in April 2014 (63 out of 796 total approved I-129F petitions). This quality assurance reviewer found that 9 of the 63 approved I-129F petitions did not indicate for State's consular officers, as required by USCIS's procedures, whether IMBRA disclosures applied.
Consular officers we spoke to at one post stated that they were providing information to beneficiaries only if USCIS officers clearly indicated on the approved I-129F petition that IMBRA requirements applied. The consular officers stated that if USCIS officers did not clearly notate the approved I-129F petitions, they returned the approved I-129F petitions to USCIS. USCIS officials attributed the errors in CLAIMS 3 data to officer error and misunderstanding of the SOPs regarding IMBRA implementation. In response to these reviews and our audit work, Service Center Operations officials stated that, among other things, they revised the I-129F SOP in May 2014 and again in June 2014. In particular, the May 2014 revision to the I-129F SOP was intended to clarify, among other things, the IMBRA filing limits, waiver requirements, and notations indicating whether IMBRA disclosures apply. In June 2014, USCIS again revised the procedures to further clarify the waiver requirements. To disseminate SOP revisions, a Service Center Operations official stated that the revised SOP is e-mailed to a point of contact in each service center, with the revisions highlighted in the SOP and the e-mail. The official said that the point of contact generally distributes the revised SOP to officers via e-mail and will meet with staff to discuss changes, if necessary. While these are positive steps, additional training could help provide USCIS with more reasonable assurance that its officers are aware of IMBRA requirements to assist them in reviewing and maintaining data on petitions consistent with USCIS's procedures. As previously discussed, our analysis of CLAIMS 3 data showed that USCIS officers have not entered information into CLAIMS 3 consistent with USCIS's SOPs. USCIS Service Center Operations officials attributed the errors we identified in the CLAIMS 3 data to officers' misunderstandings of the required procedures. Service Center Operations officials said in August 2014 that they had no plans to require the service centers to provide additional training to officers on revisions made to the SOP because officers receive initial training when they are hired and additional training on an ad hoc basis at the service centers, as necessary. USCIS Service Center Operations does not require service centers to conduct additional training for incumbent officers based on revisions to its SOPs to ensure that changes are understood. Rather, these officials stated that service centers determine when officers need additional training, which they may provide to officers in the form of e-mails, briefings, or formal classroom lessons. Standards for Internal Control in the Federal Government maintains that federal agencies are to provide staff with the training necessary to carry out assigned responsibilities and meet the organizations' objectives. Moreover, in our guide for assessing strategic training and development efforts, we have reported that training is essential to developing the knowledge and skills needed to administer agency programs properly. Given that the SOP has been revised three times in less than 1 year and officers have not maintained data in CLAIMS 3 consistent with the SOP, additional training for officers could help USCIS better ensure that its officers understand changes made to the SOPs and collect and maintain reliable data on I-129F petitions as required by USCIS's SOP and consistent with IMBRA.
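The consistency rules discussed above lend themselves to simple automated checks of CLAIMS 3 records: at most one waiver can be marked per I-129F petition, a criminal waiver applies only to multiple filers, and I-129F petitions for K-3 visas are not subject to waiver requirements. The following minimal sketch illustrates checks of this kind; it is not USCIS or GAO code, and the field names are hypothetical stand-ins for the CLAIMS 3 data elements described in this report.

    # Illustrative consistency checks for I-129F records (hypothetical field names;
    # not USCIS or GAO code).
    def check_petition(p):
        """Return a list of apparent inconsistencies for one I-129F record."""
        problems = []
        waiver_flags = [p["general_waiver"], p["criminal_waiver"], p["mandatory_waiver"]]  # each "Y" or "N"
        waivers_marked = waiver_flags.count("Y")

        # Only one waiver can apply to a given petition.
        if waivers_marked > 1:
            problems.append("more than one waiver marked as required")

        # I-129F petitions for K-3 (spouse) visas are not subject to IMBRA waiver requirements.
        if p["visa_type"] == "K-3" and waivers_marked > 0:
            problems.append("waiver marked on a K-3 petition")

        # A criminal waiver applies only to multiple filers (one or more prior K-1 filings).
        if p["criminal_waiver"] == "Y" and p["prior_k1_filings"] == 0:
            problems.append("criminal waiver marked with no prior I-129F filings")

        return problems

    # Example of the kind of record described above as erroneous.
    record = {"visa_type": "K-1", "prior_k1_filings": 0,
              "general_waiver": "N", "criminal_waiver": "Y", "mandatory_waiver": "N"}
    print(check_petition(record))  # ['criminal waiver marked with no prior I-129F filings']

Checks along these lines would flag the kinds of records noted above, such as petitions with more than one waiver marked or with a criminal waiver marked for a petitioner who had filed no prior petitions.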
In accordance with IMBRA, USCIS has been charged with mitigating the risk posed to beneficiaries by violent or abusive petitioners by ensuring, to the extent practicable, that petitioners disclose complete information, including their filing history and criminal conviction information, on the I-129F petition. USCIS has been revising the I-129F petition to address inaccuracies and deficiencies for more than 10 months and has not set a time frame for the planned completion of these changes. A time frame for completion would help the agency better monitor progress toward finalizing revisions to the petition. In addition, State could take additional actions to ensure that its consular officers document that the IMBRA pamphlet is provided to and understood by the beneficiary, as internal State guidance requires, by revising its curriculum to include training on the FAM's IMBRA-related documentation requirements. By incorporating IMBRA-related documentation requirements in its training curricula, State could also better provide reasonable assurance that its officers are aware of the required procedures and are better positioned to inform beneficiaries so they know their legal rights. Although IMBRA was enacted in January 2006, USCIS does not yet collect and maintain all data in a manner consistent with IMBRA. Ensuring that the data are available electronically would allow for more complete reporting on IMBRA implementation and would also help USCIS management better ensure that I-129F petitions are being adjudicated in accordance with IMBRA. USCIS has begun the process of transforming the I-129F petition to an electronic format; however, it is uncertain what data will be maintained in ELIS, based on the agency's draft user stories to identify data requirements and on prior USCIS efforts that did not fully capture data in an electronic format consistent with IMBRA. Taking steps to ensure that all data to be collected in accordance with IMBRA are included with the release of the electronic I-129F petition, and providing additional training, could help USCIS better ensure that IMBRA requirements are properly implemented and that data on petitions are collected and maintained consistent with USCIS procedures. We are making four recommendations to improve the implementation of IMBRA. To better ensure the consistent application of IMBRA waiver requirements and adjudication of I-129F petitions, we recommend that the Director of USCIS set a target time frame for completing the agency's review of revisions to the I-129F petition. To ensure that fiancé(e)s and spouses applying for K visas receive and understand the information to be provided to them under IMBRA and that consular officers adhere to documentation guidance in the FAM, we recommend that the Secretary of State incorporate the FAM's IMBRA-related documentation requirements into the Foreign Service Institute's training curriculum for entry-level and midlevel consular officers. To ensure that data required by IMBRA are collected, maintained, and reliable, we recommend that the Director of USCIS take the following two actions: ensure that IMBRA-required data elements will be collected in an automated manner with the release of the electronic I-129F petition, and provide additional training to officers who adjudicate I-129F petitions on the IMBRA-related requirements in the adjudication process. We provided a draft of this report to the Secretaries of Homeland Security and State, and the Attorney General.
DHS and State provided written responses, which are reproduced in full in appendixes III and IV, respectively. DHS concurred with our three recommendations to that agency and described actions under way or planned to address them. With regard to our first recommendation to DHS that USCIS set a target time frame for completing the agency’s review and revisions to the I-129F petition, DHS concurred and stated that USCIS has drafted the revised Form 129F and instructions and plans to distribute them for internal review in December 2014. DHS stated that once the internal review is completed, the revised form and instructions will undergo a public comment period and the I-129F standard operating procedures will be updated. DHS estimated a completion date of September 30, 2015. With regard to our second recommendation to DHS that USCIS ensure that IMBRA-required data elements will be collected in an automated manner with the release of the electronic I-129F petition, DHS concurred and stated that USCIS will identify all data that will be collected and estimated a completion date of December 31, 2016. With regard to our third recommendation to DHS that USCIS provide additional training to officers who adjudicate I-129F petitions on the IMBRA-related requirements in the adjudication process, DHS concurred and stated that USCIS has developed a training presentation for officers on IMBRA-related requirements and that all officers adjudicating the I-129F will be required to complete the course by the end of January 2015. These actions should address the intent of our recommendations. In addition, State concurred with our recommendation that State incorporate the FAM’s IMBRA-related documentation requirements in the Foreign Service Institute’s training curriculum for entry-level and midlevel consular officers. State noted that additional IMBRA-related training would be provided to entry-level and midlevel consular officers. Specifically, State indicated that the Foreign Service Institute’s 6-week mandatory training for entry-level consular adjudicators, and two courses for midlevel consular officers would be expanded to explicitly emphasize IMBRA-related requirements. When implemented, these steps should help ensure that K visa beneficiaries receive and understand information available to them under IMBRA. Technical comments provided by DHS, State, and DOJ were incorporated, as appropriate. We are sending copies of this report to the Secretaries of Homeland Security and State, the Attorney General, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix IV. In addition to the contact named above, Kathryn Bernet (Assistant Director), Frances Cook, Monica Kelly, Connor Kincaid, Stanley Kostlya, Thomas Lombardi, Linda S. Miller, Jessica Orr, Michelle Woods, and Jim Ungavarsky made significant contributions to this work.
Enacted in January 2006, IMBRA was passed by Congress to address reports of domestic violence and abuse of foreign beneficiaries married or engaged to U.S. citizens who have petitioned for them to enter the United States on a K visa. As amended, IMBRA requires that the federal government collect and provide to beneficiaries information about petitioners' prior K visa petitions and criminal histories. USCIS is responsible for collecting this information and adjudicating petitions, State is responsible for disclosing information to beneficiaries, and DOJ is authorized to enforce IMBRA. The Violence Against Women Reauthorization Act of 2013 mandates that GAO report on IMBRA implementation. This report examines the extent to which (1) DHS, State, and DOJ have implemented processes to ensure compliance with IMBRA, and (2) DHS collects and maintains reliable data to manage the K visa process. GAO analyzed IMBRA, USCIS, and State policies, procedures, and guidance, and K visa petition data from March 2012 through March 2014. GAO also interviewed USCIS, State, and DOJ officials regarding their agencies' implementation of IMBRA. The Departments of Homeland Security (DHS), Justice (DOJ), and State (State) have processes to help ensure compliance with the International Marriage Broker Regulation Act of 2005 (IMBRA), as amended, but State could better document information on IMBRA disclosures. Specifically, consistent with IMBRA, DHS's U.S. Citizenship and Immigration Services (USCIS) collects information from petitioners—U.S. citizens who apply to bring noncitizen fiancé(e)s, spouses, and their children (beneficiaries) into the country—through I-129F petitions for K visas. DOJ is responsible for pursuing federal civil and criminal penalties for IMBRA violations. State has guidance on processes for providing IMBRA information to beneficiaries (referred to as disclosures), such as a pamphlet outlining for beneficiaries the K visa process and legal rights and resources available to immigrant crime victims. Specifically, State's guidance requires consular officers to document within case notes in State's database whether they made all of the IMBRA-required disclosures to the beneficiary during the visa interview. However, GAO's review of a sample of K visa applications showed that in about 52 percent of interview case notes (76 of 147), consular officers did not document that they had provided beneficiaries the IMBRA pamphlet as required by State's guidance. In October 2014, State drafted a guidance cable for consular officers on IMBRA implementation, including a reminder to follow guidance regarding IMBRA documentation. State's consular officer training courses, however, do not cover IMBRA-related documentation procedures outlined in its guidance. Incorporating IMBRA-related documentation requirements into training courses could help State better ensure that consular officers are aware of the requirements for documenting IMBRA disclosures. Consistent with IMBRA, USCIS is to collect and maintain data on, among other things, eight elements in the K visa process for GAO reporting purposes; however, six of the eight elements are either not reliable or are not collected or maintained in a reportable (i.e., electronic) format. Thus, these elements were not readily available for GAO's review. For example, USCIS is to collect and maintain data on I-129F petitions where the petitioner had one or more criminal convictions. 
This information is maintained in hard copy in the petition file and thus was not readily available for GAO's review. USCIS has begun planning to electronically capture I-129F petition data under the agency's overarching transformation to an electronic immigration benefits system. However, this transformation has faced significant delays, and as of September 2014, the electronic I-129F petition design requirements have not been finalized. Consistent with federal internal control standards, ensuring that all of the IMBRA-related requirements will be captured with the release of the I-129F electronic petition would better position USCIS to collect and maintain complete data on petitioners for reporting purposes and management oversight. Further, USCIS officers have not consistently adjudicated I-129F petitions or recorded complete and accurate data. Specifically, GAO found that USCIS's data are not reliable for determining the number of I-129F petitions filed by persons who have previously filed I-129F petitions for a fiancé(e) or spouse or that required IMBRA waivers because of, among other things, officer error in recording data on petitions. Additional training for officers could help USCIS better ensure its officers are aware of IMBRA requirements to assist them in maintaining petitions data consistent with IMBRA. GAO recommends that State provide training to consular officers on IMBRA documentation requirements. GAO also recommends, among other things, that USCIS ensure that all IMBRA-related data will be captured with the planned electronic release of the I-129F petition and that its officers receive additional training on IMBRA requirements. State and DHS concurred with our recommendations.
In November 1994, the Office of the Director of Defense Procurement initiated the SPS program to acquire and deploy a single automated system to perform all contract-management-related functions for all DOD organizations. At that time, life-cycle costs were estimated to be about $3 billion over a 10-year period. From 1994 to 1996, the department defined SPS requirements and solicited commercially available vendor products for satisfying these requirements. Subsequently, in April 1997, the department awarded a contract to American Management Systems (AMS), Incorporated, to (1) use AMS’s commercially available contract management system as the foundation for SPS, (2) modify this commercial product as necessary to meet DOD requirements, and (3) perform related services. The department also directed the contractor to deliver functionality for the system in four incremental releases. The department later increased the number of releases across which this functionality would be delivered to seven, reduced the size of the increments, and allowed certain more critical functionality to be delivered sooner (see table 1 for proposed SPS functionality by increment). Since our report of July 2001, DOD has revised its plans. According to the SPS program manager, current plans no longer include increments 6 and 7 or releases 5.0 and 5.1. Instead, release 4.2 (increment 5) will include at least three, but not more than seven, subreleases. At this time, only the first of the potentially seven 4.2 subreleases is under contract. This subrelease is scheduled for delivery in April 2002, with deployment to the Army and the Defense Logistics Agency scheduled for June 2002. Based on the original delivery date, release 4.2 is about one year overdue. The department reports that it has yet to define the requirements to be included within the remaining 4.2 subreleases, and has not executed any contract task orders for these subreleases. According to SPS officials, they will decide later this year whether to invest in these additional releases. As of December 2001, the department reported that it had deployed four SPS releases to over 777 locations. The Director of Defense Procurement (DDP) has responsibility for the SPS program, and the CIO is the milestone decision authority for SPS because the program is classified as a major Defense acquisition. Our July 2001 report detailed program problems and investment management weaknesses. To address these weaknesses, we recommended, among other things, that the department report on the lessons to be learned from its SPS experience for the benefit of future system acquisitions. Similarly, other reviews of the program commissioned by the department in the wake of our review raised similar concerns and identified other problems and management weaknesses. The findings from our report are summarized below in two major categories: lack of economic justification for the program and inability to meet program commitments. We also summarize the findings of the other studies. The Clinger-Cohen Act of 1996, OMB guidance, DOD policy, and practices of leading organizations provide an effective framework for managing information technology investments, not just when a program is initiated, but continuously throughout the life of the program. 
Together, they provide for (1) economically justifying proposed projects on the basis of reliable analyses of expected life-cycle costs, benefits, and risks; and (2) using these analyses throughout a project's life cycle as the basis for investment selection, control, and evaluation decisionmaking, and doing so for large projects (to the maximum extent practical) by dividing them into a series of smaller, incremental subprojects or releases and individually justifying investment in each separate increment on the basis of costs, benefits, and risks. The department had not met these investment management tenets for SPS. First, the latest economic analysis for the program—dated January 2000—was not based on reliable estimates because most of the cost estimates in the 2000 economic analysis were estimates carried forward from the April 1997 analysis (adjusted for inflation). Only the cost estimates being funded and managed by the SPS program office, which were 13 percent of the total estimated life-cycle cost in the analysis, were updated in 2000 to reflect more current contract estimates and actual expenditures/obligations for fiscal years 1995 through 1999. Moreover, the military services, which share funding responsibility with the SPS program office for implementing the program, questioned the reliability of these cost estimates. However, this uncertainty was not reflected in the economic analysis through any type of sensitivity analysis. A sensitivity analysis would have disclosed for decisionmakers the investment risk being assumed by relying on the estimates presented in the economic analysis. Moreover, the latest economic analysis (January 2000) was outdated because it did not reflect the program's current status and known problems and risks. For instance, this analysis was based on a program scope and associated costs and benefits that anticipated four software releases. However, as mentioned previously, the program now consists of five releases, and subreleases within releases, in order to accommodate changes in SPS requirements. Estimates of the full costs, benefits, and risks relating to this additional release and its subreleases were not part of the 2000 economic analysis. Also, this analysis did not fully recognize actual and expected delays in meeting SPS's full operational capability milestone, which had slipped by 3½ years, and DOD officials said that further delays were expected. Such delays not only increase the system acquisition costs but also postpone, and thus reduce, accrual of system benefits. Further, several DOD components are now questioning whether they will even deploy the software, which would further reduce the cost-effectiveness calculated for SPS in the 2000 economic analysis. Second, the department had not used these analyses as the basis for deciding whether to continue to invest in the program. The latest economic analysis showed that SPS was not a cost-beneficial investment because the estimated benefits to be realized did not exceed estimated program costs. In fact, the 2000 analysis showed estimated costs of $3.7 billion and estimated benefits of $1.4 billion, a recovery of only 37 percent of costs. According to the former SPS program manager, this analysis was not used to manage the program, and there was no DOD requirement for updating an economic analysis when changes to the program occurred. Third, DOD had not made its investment decisions incrementally as required by the Clinger-Cohen Act and OMB guidance.
That is, although the department is planning to acquire and implement SPS as a series of five increments, it has not made decisions about whether to invest in each release on the basis of the release's expected return on investment and of whether prior releases were actually achieving return-on-investment expectations. In fact, for the four increments that have been deployed, the department had not validated whether the increments were providing promised benefits and was not accounting for the costs associated with each increment so that it could even determine actual return on investment. Instead, the department had treated investment in this program as one monolithic investment decision, justified by a single, "all-or-nothing" economic analysis. Our work has shown that it is difficult to develop, with any degree of accuracy, cost and schedule estimates for many increments to be delivered over many years because later increments are not well understood or defined. Also, these estimates are subject to change based on actual program experiences and changing requirements. This "all-or-nothing" approach to investing in large system acquisitions, like SPS, has repeatedly proven to be ineffective across the federal government, resulting in huge sums being invested in systems that do not provide commensurate benefits. Measuring progress against program commitments is closely aligned with economically justifying information-technology investments, and is equally important to ensuring effective investment management. The Clinger-Cohen Act, OMB guidance, DOD policy, and practices of leading organizations provide for making and using such measurements as part of informed investment decisionmaking. DOD had not met key commitments and was uncertain whether it was meeting other commitments because it was not measuring them. (See table 2 for a summary of the department's progress against commitments.) For example, a study intended to gauge the benefits of deployed SPS releases did not produce information comparable to the program's two economic analyses, such as the number and dollar value of estimated benefits, and the information gathered did not map to the 22 benefit types listed in the 1997 economic analysis. Instead, the study collected subjective judgments (perceptions) that were not based on predefined performance metrics for SPS capabilities and impacts. Thus, the department was not measuring SPS against its promised benefits. The former program manager told us that knowing whether SPS was producing value and meeting commitments was not the program office's objective because there was no departmental requirement to do so. Rather, the objective was simply to acquire and deploy the system. Similarly, CIO officials told us that the department was not validating whether deployed releases of SPS were producing benefits because there was no DOD requirement to do so and no metrics had been defined for such validation. However, the Clinger-Cohen Act of 1996 and OMB guidance emphasize the need to have investment management processes and information to help ensure that information-technology projects are being implemented at acceptable costs and within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance (i.e., that projects are meeting the cost, schedule, and performance commitments upon which their approval was justified). For programs such as SPS, DOD required this cost, schedule, and performance information to be reported quarterly to ensure that programs did not deviate significantly from expectations.
In effect, these requirements and guidance recognize that one cannot manage what one cannot measure. Shortly after receiving our draft report for comment, the department initiated several studies to determine the program's current status, assess program risks, and identify actions to improve the program. These studies focused on such areas as program costs and benefits, planned commitments, requirements management, program office structure, and systems acceptance testing. Consistent with our findings and recommendations, these studies identified the need to establish performance metrics that will enable the department to measure the program's performance and tie these metrics to benefits and customer satisfaction; clearly define organizational accountability for the program; provide training for all new software releases; standardize the underlying business processes and rules that the system is to support; acquire the software source code; and address open customer concerns to ensure user satisfaction. In addition, the department found other program management concerns not directly within the scope of our review, such as the need to appropriately staff the program management office with sufficient resources and address the current lack of technical expertise in areas such as contracting, software engineering, testing, and configuration management; modify the existing contract to recognize that the system does not employ a commercial-off-the-shelf software product, but rather is based on a customized software product; establish DOD-controlled requirements management and acceptance testing processes and practices that are rigorous and disciplined; and assess the continued viability of the existing contractor. To address the many weaknesses in the SPS program, we made several recommendations in our July 2001 report. Specifically, we recommended that (1) investment in future releases or major enhancements to the system be made conditional on the department first demonstrating that the system is producing benefits that exceed costs; (2) future investment decisions, including those regarding operations and maintenance, be based on complete and reliable economic justifications; (3) any analysis produced to justify further investment in the program be validated by the Director, Program Analysis and Evaluation; (4) the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I) clarify organizational accountability and responsibility for measuring the SPS program against commitments and ensure that these responsibilities are met; (5) program officials take the necessary actions to determine the current state of progress against program commitments; and (6) the Assistant Secretary of Defense for C3I report by October 31, 2001, to the Secretary of Defense and to DOD's relevant congressional committees on lessons learned from the SPS investment management experience, including what actions will be taken to prevent a recurrence of this experience on other system acquisition programs. DOD's reaction to our report was mixed. In official comments on a draft of our report, the Deputy CIO generally disagreed with our recommendations, noting that they would delay development and deployment of SPS. Since that time, however, the department has acknowledged its SPS problems and begun taking steps to address some of them. In particular, it has done the following.
The department has established and communicated to applicable DOD organizations the program’s chain-of-command and defined each participating organization’s responsibilities. For example, the Joint Requirements Board was delegated the responsibility for working with the program users to define and reach agreement on the needed functionality for each software release. The department has restructured the program office and assigned additional staff, including individuals with expertise in the areas of contracting, software engineering, configuration management, and testing. However, according to the current program manager, additional critical resources are needed, such as two computer information technology specialists and three contracting experts. It has renegotiated certain contract provisions to assume greater responsibility and accountability for the requirements management and testing activities. For example, DOD, rather than the contractor, is now responsible for writing the test plans. However, additional contract changes remain to be addressed, such as training, help-desk structure, facilities support, and system operations and maintenance. The department has designated a user-satisfaction manager for the program and defined forums and approaches intended to better engage users. It has established a new testing process, whereby program officials now develop the test plans and maintain control over all software testing performed. In addition, SPS officials have stated their intention to prepare analyses for future program activities beyond those already under contract, such as the acquisition of additional system releases, and use these analyses in deciding whether to continue to deploy SPS or pursue another alternative; define system performance metrics and use these metrics to assess the extent to which benefits have been realized from already deployed system releases; and report on lessons learned from its SPS experience to the Secretary of Defense and relevant congressional committees. The department’s actions and intentions are positive steps and consistent with our recommendations. However, much remains to be accomplished. In particular, the department has yet to implement our recommendations aimed at ensuring that (1) future releases or major enhancements to the system be made conditional on first demonstrating that the system is producing benefits that exceed costs and (2) future investment decisions, including those regarding operations and maintenance, be based on a complete and reliable economic justification. We also remain concerned about the future of SPS for several additional reasons. First, definitive plans for how and when to justify future system releases or major enhancements to existing releases do not yet exist. Second, SPS officials told us that release 4.2, which is currently under contract, may be expanded to include functionality that was envisioned for releases 5.0 and 5.1. Including such additional functionality could compound existing problems and increase program costs. Third, not all defense components have agreed to adopt SPS. For example, the Air Force has not committed to deploying the software; the National Imagery and Mapping Agency, the Defense Advanced Research Projects Agency, and the Defense Intelligence Agency have not yet decided to use SPS; and the DOD Education Agency has already adopted another system because it deemed SPS too expensive.
The Department of Defense (DOD) lacks management control of the Standard Procurement System (SPS). DOD has not (1) ensured that accountability and responsibility for measuring progress against commitments are clearly understood, performed, and reported; (2) demonstrated, on the basis of reliable data and credible analysis, that the proposed system solution will produce economic benefits commensurate with costs; (3) used data on progress against project cost, schedule, and performance commitments throughout a project's life cycle to make investment decisions; and (4) divided this large project into a series of incremental investment decisions to spread the risks over smaller, more manageable components. GAO found that DOD lacks the basic information needed to make informed decisions on how to proceed with the project. Nevertheless, DOD continues to push forward in acquiring and deploying additional versions of SPS. This testimony summarizes a July report (GAO-01-682).
Floods can result in the loss of lives, extensive damage to property and agriculture, and large-scale disruptions to business and infrastructure, such as transportation and water and sewer services. The National Oceanic and Atmospheric Administration estimates that floods cause about 140 deaths in the United States each year, and the Army Corps of Engineers estimates that floods cause $6 billion in average annual losses. Congress established NFIP in the National Flood Insurance Act of 1968 to provide policyholders with some insurance coverage for flood damage as an alternative to disaster assistance, and to try to reduce the escalating costs of repairing flood damage. The program was subsequently modified by various amendments, including the Flood Disaster Protection Act of 1973 (1973 Act) and the National Flood Insurance Reform Act of 1994. Most recently, NFIP was amended by the Biggert-Waters Flood Insurance Reform Act of 2012. The 1973 Act added certain requirements that, according to FEMA officials, were intended to encourage community participation in NFIP. Specifically, as a condition of future federal financial assistance, communities are required to participate in NFIP and to adopt adequate floodplain ordinances with effective enforcement provisions consistent with federal standards in order to reduce or avoid future flood losses. Figure 1, which shows the location of U.S. Indian reservations and major flood disaster declarations over a 25-year period, indicates that many Indian tribes reside in areas that have experienced multiple floods. The 1973 Act denied direct federal financial assistance and financing by private lending institutions regulated by federal regulators for acquisition or construction purposes in participating communities where flood insurance was available unless the property was covered by flood insurance. Prior to the 1973 Act, the purchase of flood insurance had been voluntary. However, this mandatory purchase requirement, further amended by the National Flood Insurance Reform Act of 1994, effectively requires owners of property to obtain flood insurance if they are located in a Special Flood Hazard Area (SFHA) within a community participating in NFIP and obtain a mortgage from a federally regulated lending institution or a federal agency lender or receive direct federal financial assistance for acquisition or construction purposes. The mandatory purchase requirement applies to secured mortgage loans from financial institutions such as banks, savings and loan associations, savings banks, and credit unions that are supervised by federal agencies, including the Federal Deposit Insurance Corporation and the Office of the Comptroller of the Currency. It also applies to all mortgage loans secured by real estate on which a building is constructed in an SFHA for which flood insurance is available, if the loans are purchased by Fannie Mae or Freddie Mac in the secondary mortgage market. Loans and grants through financial assistance programs from agencies such as the Federal Housing Administration and the Department of Veterans Affairs are also affected. The requirement also extends to several federal programs that assist Indian tribes. For example, recipients of funds from BIA's Housing Improvement Program, HUD's Indian Housing Block Grant (IHBG) and Indian Community Development Block Grant (ICDBG), and several USDA Rural Development loan programs are required to purchase flood insurance if an assisted structure is in an SFHA.
Finally, individuals in SFHAs who receive federal disaster assistance after September 23, 1994, for flood disaster losses to real or personal property are required, as a condition for receiving future disaster assistance, to purchase and maintain flood insurance coverage on the property. According to FEMA, in December 2012 the average NFIP policy cost about $600 per year, with policies in SFHAs typically costing more and some policies outside SFHAs costing less. FEMA identifies and maps flood-prone areas throughout the United States and its territories that are eligible to participate in NFIP. According to a Congressional Research Service (CRS) report on NFIP, FEMA also makes flood hazard information available on its website for viewing or purchasing. The report notes that FEMA works with communities to develop new flood hazard data as part of a flood insurance study, issues public notification about maps, and engages in education and outreach to help ensure that community leaders and residents understand the mapping process and the appropriate use of flood maps. The CRS report further notes that reliable flood risk data, including updated flood maps, and educating residents about flood risk, contribute to mitigating future flood losses. Most areas of flood hazard are commonly identified on Flood Insurance Rate Maps (FIRM), and areas not yet identified by a FIRM may be mapped on Flood Hazard Boundary Maps. Several areas of flood hazard are identified on these maps, one of which is the SFHA. The SFHA is a high-risk area defined as any land that would be inundated by a flood having a 1 percent chance of occurring in a given year (base flood); this is the equivalent of a 26 percent chance of flooding over a 30-year mortgage. According to FEMA, the SFHA constitutes a reasonable compromise between the need for building restrictions to minimize potential loss of life and property and the economic benefits to be derived from floodplain development. Development may take place within an SFHA as long as it complies with local floodplain management ordinances, which must meet minimum federal requirements. Flood insurance is required for insurable structures within high-risk areas to protect federal financial investments and assistance used for acquisition or construction purposes within communities participating in NFIP. In July 2012, Congress passed the Biggert-Waters Flood Insurance Reform Act of 2012. The act extends NFIP for 5 years and makes reforms to the program that include (1) phasing out subsidies for many properties, (2) raising the cap on annual premium increases for other policies from 10 to 20 percent, (3) clarifying that certain multifamily properties are eligible for NFIP policies, (4) imposing minimum deductibles for flood claims, (5) requiring NFIP to establish a reserve fund, and (6) establishing a technical mapping advisory council to deal with map modernization issues. The act also calls for an assessment by FEMA and GAO, separately, of options and strategies for privatizing NFIP in the future and authorizes FEMA to pursue private risk-management initiatives to determine the capacity of private insurers and markets to assist communities in managing the full range of financial risks associated with flooding. Because of its subsidized premium rates and catastrophic hurricane-related floods in recent years, NFIP has accrued a substantial debt that stood at nearly $18 billion as of October 2012.
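The 26 percent figure cited above follows directly from compounding the 1 percent annual chance of a base flood over the life of a 30-year mortgage. The short sketch below illustrates the arithmetic; the function name is ours, and the calculation assumes flood risk is independent from year to year.

```python
# Chance of at least one base flood over a mortgage term, assuming an
# independent 1 percent chance of flooding in each year.

def cumulative_flood_probability(annual_chance: float, years: int) -> float:
    """Return the probability of at least one flood over the given number of years."""
    return 1.0 - (1.0 - annual_chance) ** years

if __name__ == "__main__":
    p = cumulative_flood_probability(annual_chance=0.01, years=30)
    print(f"Chance of at least one base flood over a 30-year mortgage: {p:.0%}")  # about 26%
```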
We previously reported that NFIP was designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars, and that FEMA had statutory authority to borrow funds from Treasury to keep NFIP solvent in years when losses were high. We noted that by design NFIP was not actuarially sound, because Congress authorized subsidized insurance rates for policies covering certain structures to encourage communities to join the program. Since 2000, NFIP has experienced several catastrophic loss years (years with $1 billion or more in losses). These years include 2001, 2004, 2005, and 2008. By the end of 2012, NFIP is expected to have experienced another catastrophic loss year because of the enormous damage from the October storm known as Superstorm Sandy. Under FEMA’s NFIP regulations, a community is defined as any state or area or political subdivision thereof or any Indian tribe or authorized tribal organization, or Alaska Native Village or authorized native organization, that has the authority to adopt and enforce floodplain management ordinances for the area under its jurisdiction. Indian tribes, authorized tribal organizations, Alaska Native villages, and authorized native organizations that have land use authority are considered communities by NFIP and can join the program even if no flood hazard map exists that covers all tribal lands. See 44 C.F.R. § 206.110(k). Participating communities can receive discounts on flood insurance if they establish floodplain management programs that go beyond the minimum requirements of NFIP. FEMA can suspend communities that do not comply with the program, and communities can withdraw from the program; sanctions apply in both cases. Currently, more than 21,000 communities participate in NFIP. Communities participating in NFIP do so as part of either the regular or emergency program. A community participating in the regular NFIP program is usually provided with a FIRM and a flood insurance study. As part of their agreement to participate in NFIP, communities adopt and enforce floodplain management ordinances and FIRMs. If communities do not adopt and enforce these ordinances, they can be placed on probation or suspended from the program. However, such actions take place only after FEMA has taken steps to help the community become compliant. The NFIP emergency program is the initial phase of a community’s participation in NFIP and was designed to provide a limited amount of flood insurance. A community participating in the emergency program either does not have an identified and mapped flood hazard or has been provided with a Flood Hazard Boundary Map, and the community is required to adopt limited floodplain management standards to control future use of its floodplain. According to FEMA, fewer than 3 percent of the more than 21,000 communities participating in NFIP are in the emergency program. The federal government has consistently recognized Indian tribes as distinct, independent political communities with the inherent powers of a limited sovereignty that has never been extinguished. As of August 2012, there were 566 federally recognized tribes—341 in the contiguous 48 states and 225 in Alaska. To help manage tribal affairs, tribes have formed governments and subsidiaries of tribal governments, including schools, housing, health, and other types of corporations.
The United States has a trust responsibility to recognized Indian tribes and maintains a government-to-government relationship with them. Tribal lands vary dramatically in size, demographics, and location. They range from the Navajo Nation, which consists of about 27,000 square miles across portions of Arizona, New Mexico, and Utah, to some tribal land areas in California of less than 1 square mile. Over 176,000 American Indians live on the Navajo reservation, while some other tribal lands have fewer than 50 Indian residents. Some Indian reservations have a mixture of Indian and non-Indian residents. Most tribal lands are rural or remote, although some are near metropolitan areas. We have reported in the past that some tribes are landless (see table 4 in GAO, Indian Issues: BLM’s Program for Issuing Individual Indian Allotments on Public Lands Is No Longer Viable, GAO-07-23R (Washington, D.C.: Oct. 20, 2006)). In addition, some lands within reservation boundaries have been inherited or purchased by non-Indians, and Indian tribes generally lack jurisdiction over land owned by non-Indians (see Brendale v. Confederated Tribes and Bands of Yakima Indian Nations, 492 U.S. 408 (1989)). In some cases, although a tribe is not an NFIP participant, the tribe may be located in another community that participates in NFIP. A proposed amendment to the Robert T. Stafford Disaster Relief and Emergency Assistance Act would allow Indian tribes to request a major disaster declaration by the President. The proposed amendment would provide Indian tribes with this authority so that they would not be required to rely on assistance through a presidential declaration requested by the state or locality. Federal officials with whom we spoke generally viewed this amendment as a positive action in the interest of tribes. For example, USDA Rural Development officials said that this authority would make it easier for tribes to access disaster relief resources. Representatives of several tribes we interviewed said that such a change would open up direct communication between the federal government and Indian tribes. However, questions remained about how the proposed amendment would be implemented. It is not yet clear how the proposed Stafford Act amendment would affect Indian tribes’ willingness to participate in NFIP. According to FEMA, as of August 2012, 37 out of 566 federally recognized tribes nationwide—roughly 7 percent—were participating in NFIP (see table 1). The number of policies for each tribe ranged from 1 to 175, and 14 participating tribes had no individual policies. Across all participating Indian tribes, 414 policies were in place, accounting for less than 1 hundredth of a percent of all NFIP policies. FEMA’s data also show that, among communities that have received flood hazard maps but are not participating in NFIP, there are 46 tribal communities. Federal agency officials, tribes, and others described several factors that affect whether tribes purchase flood insurance through NFIP or other programs. FEMA does limited mapping in tribal communities. Flood maps show communities and homeowners the level of flood risk they face. According to FEMA, as of October 2012, 78 tribal communities had received flood hazard maps. Representatives of the National Flood Determination Association, which provides FEMA flood mapping data to mortgage lenders and insurers, told us that some tribes that were not participating in NFIP had not been mapped and that because they did not know their flood risk, they likely did not see the advantages to NFIP participation.
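The participation figures cited above can be checked with simple arithmetic. The sketch below reproduces the roughly 7 percent participation rate; the nationwide NFIP policy count used to put the 414 tribal policies in context is our own rough approximation (about 5.6 million policies at the time) and is not taken from this report.

```python
# Quick check of the tribal participation figures cited above (FEMA, August 2012).

participating_tribes = 37
federally_recognized_tribes = 566
tribal_policies = 414
total_nfip_policies = 5_600_000  # approximation for context only; not from this report

print(f"Share of tribes participating: {participating_tribes / federally_recognized_tribes:.1%}")  # ~6.5%
print(f"Tribal share of all NFIP policies: {tribal_policies / total_nfip_policies:.5%}")           # well under 0.01%
```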
In discussing the agency’s mapping efforts, FEMA officials explained that tribal communities generally included small rural areas that were not a high priority for the agency. The officials said that FEMA had focused its mapping efforts on heavily populated urban and coastal areas with a high risk of flooding. The officials also noted that because of the tribes’ sovereignty, the agency needed permission to enter tribal lands and conduct mapping activities and that such permission could be difficult to obtain. Additionally, they were concerned that tribes might not grant permission for FEMA to publish a map if tribal borders were in dispute. They explained that if FEMA could not overcome these challenges, it might exclude the tribal area from the watershed map. However, they did not provide us with any specific examples of maps with such exclusions. Representatives of a Washington tribe that received its flood hazard maps from FEMA in 2004 told us the tribe had participated in NFIP since 1997, 7 years prior to receiving the maps. The tribe initially participated in NFIP’s emergency program and then became a regular participant once its maps were finalized. They explained that the tribe had largely been motivated by a need to clarify its jurisdictional (land and water) area, which flood mapping allowed them to do. Further, they said that joining NFIP had provided an opportunity to obtain flood insurance as a tribal community instead of participating as part of the surrounding county, supporting the tribe’s interest in self-determination (for more on Indian self-determination, see GAO-10-326). The tribe is vulnerable to flooding from several sources, and the representatives noted that it had long been proactive in disaster mitigation efforts as a whole. Emergency officials from another tribe told us that approximately 10 percent of their reservation was mapped and that the mapping had been done by the Army Corps of Engineers. They said their understanding was that until recently, FEMA did not conduct flood mapping on federal reservations. As previously noted, FEMA assigns a unique community identification number to each community listed in its NFIP Community Information System database. The actions of a specific community under NFIP directly impact the availability and cost of NFIP policies for its residents. For example, a community’s actions could result in its residents receiving NFIP policy discounts. Among the tribal communities that had received flood hazard maps but were not participating in NFIP, 9 were in the 1-year "opportunity period" for addressing any identified flood hazards and joining NFIP, and 2 had been determined to have some flood risk but were not participating. One representative of a participating Washington tribe suggested that tribes that are most at risk for flooding should be given priority in efforts to encourage tribal participation in NFIP. The tribal representative said that expecting those not highly vulnerable to flooding to purchase costly flood insurance when they have other priorities would be difficult. When we spoke with an emergency management official from a nonparticipating Wisconsin tribe, he agreed that because the tribe had not experienced a major flood, there was a general lack of urgency on the part of tribal leadership about NFIP participation. Many tribes lack the resources or administrative capacity to join NFIP. FEMA officials told us that affordability affected tribes’ participation in NFIP as it did other low-income communities and individuals.
The officials also told us that they had seen slow to no insurance policy growth in areas of the country where the economy was not performing well. A representative of the National Flood Determination Association agreed that the cost of coverage could limit tribes’ participation, noting that there is a general lack of funding for mapping. Further, he said that many rural communities, including tribal communities, were not in favor of adopting land use regulations and did not have the resources to adopt and implement them. Representatives of the participating Washington tribe with the highest number of individual policies as of August 2012 acknowledged that NFIP participation was administratively burdensome and costly. In particular, they explained that developing flood damage reduction ordinances and then implementing the ordinances required dedicated staff that not all tribes have. In general, they acknowledged that many tribes lacked the resources that this tribe had to pursue NFIP participation. Similarly, a Wisconsin tribal official said that pursuing NFIP participation could be especially challenging for tribes that lacked emergency, planning, or zoning functions and tribes that may not even have developed building codes on their lands. He emphasized that such limitations should be taken into account in examining why tribes may not be participating in NFIP. Many tribes view affordability as a significant issue for their members in purchasing NFIP policies. Many tribal representatives said that affordability would be an issue for tribal members. For instance, representatives of an Oklahoma tribe told us that while affordability would not affect the tribe’s decision to participate in the program, because the tribe would use NFIP to insure its government and commercial buildings, it would be a factor for individual tribal members. Specifically, they explained that paying flood insurance premiums would be challenging for individuals who already lacked the resources to afford homeowner, renter, and automobile insurance. Representatives of other tribes agreed that NFIP premiums would be costly for the members of their tribes. Emergency management representatives of a large western tribe told us they were not aware of any tribal members who had flood insurance on their homes and that even homeowner insurance coverage was rare. They explained that the average annual household income on the reservation was between $12,000 and $15,000 and that unemployment was at least 28 percent. A tribal housing official from Alaska whose members can participate in NFIP through the surrounding borough told us that he had found that most members of his tribe dropped their homeowners insurance as soon as their homes were paid off and that he expected they would do the same with required flood insurance, which can cost more than $1,000 a year. An official for another Oklahoma tribe that was participating in NFIP but had no active individual policies said he did not believe flood insurance was a priority for members of his tribe, whose average annual household income was $6,000. Figure 2 shows a Native Alaska fishing village that has experienced flooding but whose residents, according to the tribe’s housing director, have not purchased NFIP flood insurance due to the high cost. Unique Indian issues also impact tribal participation in NFIP. As previously noted, all but one of the tribes in Alaska lack a reservation.
Because the tribes lack jurisdiction to enact and enforce land use ordinances over the land where they reside, they cannot directly participate as communities in NFIP. In many cases, the tribes are co-located with other government entities that may participate in NFIP, such as cities and boroughs, and their members may access NFIP through those other entities. Alaska state officials told us that an estimated 66 percent of the Alaska Native population could participate in NFIP because they lived in a city or borough that participated in the program. Based on data we compiled and analyzed, 58 of 225 Alaska tribes were co-located with a participating community (see table 2). Tribes in Oklahoma and elsewhere that do not have reservations—as well as tribes with reservations—face similar challenges in adopting and enforcing land use ordinances because they lack jurisdiction over certain land. Tribes with reservations do not generally have authority to adopt and enforce land use ordinances for land within the reservation’s boundaries owned by non-Indians. Likewise, representatives of an Oklahoma tribe told us that its lack of participation in NFIP was due in part to a reluctance to face possible sanctions because of the tribe’s limited ability to enact and enforce ordinances for land owned by non-Indians. For example, they explained that if a home that had experienced repeated flooding was located on land where the tribe had limited jurisdiction, the tribe could not take action to mitigate future flood damage without the owner’s permission. They said they were aware that NFIP does not have a workaround for such circumstances. The tribe had chosen to insure all tribal structures and vehicles under a private policy. In addition, representatives of several tribes explained that tribal structures differ from other local government structures but that FEMA did not take those differences into account, making participation difficult for some tribes. For example, representatives of a participating Washington tribe told us that in preparing its multihazard mitigation plan for FEMA approval, the tribe realized that the plan template had been created for states, as it called for input from counties within the jurisdiction. Instead of counties, the tribe had to substitute less specific geographic areas within the tribal community. The same tribe was participating in NFIP’s Community Rating System program, which allows communities to receive discounts on policies for their residents based on floodplain management actions the community takes beyond NFIP’s minimum requirements. The representatives explained that NFIP also lacked a tribal template for the Community Rating System program; they said such a template would facilitate the tribe’s participation. Without flood hazard maps, tribal communities, including those that may be in areas with a higher risk of flooding, may not be sufficiently aware of their flood risk. Tribes also may be reluctant to pursue NFIP participation if they are uncertain about whether they would qualify and could meet the program’s requirements. Further, those with fewer resources and less administrative capacity may be less proactive in requesting that FEMA map their communities, even though they may be vulnerable to floods.
FEMA’s outreach to tribes in the last few years has largely consisted of emergency management and homeland security training for tribal officials through its Emergency Management Institute (EMI), direct technical assistance to tribes in preparing their multihazard mitigation plans, and nationwide outreach for NFIP through its regional offices and the NFIP FloodSmart marketing campaign. FEMA officials told us that the agency helped to educate tribal officials about NFIP and floodplain management through courses offered by EMI under a mitigation curriculum that includes courses for floodplain managers on their roles and responsibilities, flood insurance, and NFIP rules and regulations. FEMA has also developed an emergency management tribal curriculum to collaborate with tribal governments in building emergency management capability and partnerships to ensure continued survival of tribal nations and communities. To some extent, FEMA’s efforts have helped some tribes better understand the flood hazards that they face. According to FEMA officials, more than 2,000 members from more than 300 Indian tribes have taken courses through EMI. The officials added that each regional office had a floodplain management specialist as the NFIP point of contact for Indian tribes. Officials from two tribes told us that they had participated in EMI training. One tribal emergency management official in Wisconsin told us that he participated in training with an EMI tribal curriculum group that communicated monthly. FEMA officials also described the direct technical assistance that they provided to tribes that were preparing hazard mitigation plans. According to FEMA, developing the plans involves identifying the tribe’s critical infrastructure, major risks and vulnerabilities, and actions to reduce those risks and vulnerabilities for various types of disasters, including floods. This assistance provides tribes with an opportunity to learn about the risks that their individual communities may face. Representatives from several tribes told us that they had completed approved mitigation plans or were in the process of completing their plans. Two of the tribes with whom we spoke had their plans approved in 2010. FEMA FloodSmart officials explained that they tried to reach communities nationwide with the FloodSmart campaign, but that they targeted those communities that were most at risk for flooding. Among these communities, they focused on urban areas which had a higher concentration of potential flood insurance buyers than rural areas. They explained that FloodSmart used a tiered marketing strategy that was based on a number of factors that point to a high potential return on investment of federal dollars, including: (1) flood insurance policy purchase history, (2) potential flood risk as determined by volume of SFHA properties, (3) flood event history, (4) volume of structures, and (5) media cost. As such, rural areas, including Indian areas, generally received lower priority. FloodSmart officials told us that since February 2007, a total of 671,000 acquisition-based direct mail pieces had been sent to approximately 383,000 distinct household addresses within zip codes that intersected with Indian reservations. Moreover, according to FloodSmart officials, on average, 112,000 direct mail pieces were sent to Indian reservations each year, the majority of which were sent to addresses for properties in an SFHA. 
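FloodSmart officials described the factors behind their tiered marketing strategy but not how those factors are combined. The sketch below is purely hypothetical: the weights, field names, and sample communities are invented to illustrate why a prioritization of this kind tends to rank dense, high-risk urban markets above rural and tribal areas.

```python
# Hypothetical sketch of a tiered marketing prioritization like the one FloodSmart
# officials described. Weights, field names, and sample data are invented; FEMA's
# actual method was not disclosed in this report.

from dataclasses import dataclass

@dataclass
class Community:
    name: str
    policy_purchase_history: float  # 0-1: prior take-up of flood policies
    sfha_property_volume: float     # 0-1: volume of properties in an SFHA
    flood_event_history: float      # 0-1: frequency of past flood events
    structure_volume: float         # 0-1: number of insurable structures (scaled)
    media_cost: float               # 0-1: relative cost of reaching the market

# Invented weights; higher media cost lowers the expected return on outreach dollars.
WEIGHTS = {
    "policy_purchase_history": 0.25,
    "sfha_property_volume": 0.25,
    "flood_event_history": 0.20,
    "structure_volume": 0.20,
    "media_cost": -0.10,
}

def marketing_score(c: Community) -> float:
    """Combine the five factors into a single prioritization score."""
    return sum(weight * getattr(c, field) for field, weight in WEIGHTS.items())

communities = [
    Community("Dense coastal city", 0.8, 0.9, 0.7, 0.9, 0.4),
    Community("Rural tribal community", 0.1, 0.5, 0.6, 0.1, 0.8),
]

for c in sorted(communities, key=marketing_score, reverse=True):
    print(f"{c.name}: score {marketing_score(c):.2f}")
```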
However, several tribal representatives with whom we spoke told us that they would still like more information about NFIP and its requirements so that they could decide whether to participate in the program and encourage their members to purchase policies through the program. In addition, representatives from a few tribes and from an insurance company told us that marketing campaigns or other outreach efforts may have little effect in Indian communities without the buy-in of tribal leaders. HUD and USDA Rural Development provide assistance in the form of housing and infrastructure grants, loans, and loan guarantees to Indian tribes. According to officials from both agencies, while neither HUD nor USDA is required to provide NFIP outreach, the agencies worked with tribes on housing issues that may include determining flood risk to housing assistance projects and assessing housing-related issues after disasters, such as floods. In addition, the officials said that their field staff had FEMA FloodSmart program material on hand for interested parties, including Indian tribes. As part of the IHBG and ICDBG programs, HUD officials in the Office of Native American Programs told us that they worked with tribes to identify their priorities and to help them determine how to best use HUD funds as an investment in addressing their needs. In addition, under both IHBG and ICDBG, recipients can use program funds to cover flood insurance premiums for properties in some high-risk areas. HUD officials said that they conducted outreach to housing authorities in locations where floods and other disasters had occurred to assess the status of HUD housing stock and to identify displaced families. HUD officials also said that they coordinated with FEMA and other agencies locally so that the agencies could work together to assess grantees’ damages and needs following a disaster, but that they did not know which tribes participated in NFIP. HUD regional officials said that they worked closely with individual tribes that had been impacted by flooding and other disasters and could offer technical and financial assistance to the tribes. Officials from three tribes told us that in their experience, HUD officials generally did not approve of using HUD assistance to build in an SFHA. An official from one of these tribes told us that to address HUD’s requirements for one of its HUD-assisted housing developments, the tribe built pads on all of the houses it was constructing to elevate them out of the flood zone. In addition, officials in HUD’s Office of Environment and Energy told us that they provided environmental training and invited grantees, including tribes, to attend this training. USDA Rural Development has a Native American Coordinator who interacts with tribes on programmatic issues, including challenges and issues that arise due to flooding. We spoke with the national coordinator, who explained that each state office serving a federally or state recognized Indian tribe designated an individual to serve as the USDA Rural Development state Native American Coordinator. He explained that the role had typically been a collateral duty although, in the past, three states had employed full-time Native American Coordinators. The coordinator’s efforts related to NFIP primarily would be liaising between tribal staff and the appropriate Rural Development staff tasked with ensuring compliance with program requirements for USDA-funded construction or development.
Specifically, USDA officials told us that they ensured compliance with NFIP requirements only for USDA-funded projects where construction or development occurred in an SFHA in an NFIP participating community. They explained that USDA’s evaluation of NFIP applicability was part of National Environmental Policy Act reviews. In these cases, USDA officials said they ensured that mitigation actions were taken and, when this was impractical, that the applicant for USDA funding purchased flood insurance. The officials told us they did not provide any assistance for developing floodplain management approaches, but required that borrowers comply with applicable state or local floodplain ordinances or permits. Both HUD and USDA officials told us that when an area had not been mapped, they might rely on tribal elders or another knowledgeable source in determining the location of flood-prone areas. Representatives from HUD and USDA, tribal representatives, and private insurers all agreed that more could be done to encourage tribes to participate in flood insurance programs. However, FEMA noted that it was limited in its efforts by the unique legal issues surrounding Indian tribes and their lands. FEMA also told us its focus was on mapping more highly populated areas, which typically did not include Indian tribal communities. HUD and USDA acknowledged that more information and support could encourage tribes to participate in NFIP. For instance, HUD officials in the Northern Plains region, where more than 30 federally recognized tribes are located, told us that there was likely a need for more education among tribes. The officials said that they were aware that FEMA had to prioritize limited federal dollars for flood mapping activities and that tribal lands might not be a top priority. However, they noted that tribes also might not be proactive in requesting flood maps for their communities because they had received conflicting information about FEMA’s authority to map tribal lands. The officials said they had invited FEMA to a regional meeting in the last year to share information on disaster topics, including floods and flood insurance, with tribal housing officials from the region. As previously noted, officials from HUD’s Office of Environment and Energy also told us that they were increasing opportunities for HUD grantees, including tribes, to obtain information and training on environmental topics such as flood risks. Tribal representatives had a number of suggestions that could lead to increased tribal participation, ranging from expanding FEMA outreach and education to requiring tribes to have flood insurance.
They suggested, among other things, more emphasis on educating tribes on the importance of flood insurance, as at least one tribe had been experiencing more rain each year; more outreach to tribes, including those without flood hazard maps, to help them understand their vulnerability to floods and the advantages of NFIP participation; meetings that brought together FEMA officials and elected tribal councils that could make decisions on behalf of their tribes; federal grants to help tribes develop elevation certificates and to retrofit older properties to lower risk and make the policies more affordable for members; a simulation exercise that included the host tribe, FEMA officials, and other government officials with whom tribes would need to coordinate in a flood-related disaster; and an amendment to the NFIP statute to address issues specific to tribes’ limited ability to adopt and enforce land use ordinances. At least one tribal representative said that it would be reasonable for Congress to require a flood mitigation plan across communities and tribal lands, regardless of risk level, and that tribes with critical infrastructure in a flood-prone area should be required to participate in NFIP or sign a waiver of future flood assistance. However, another representative suggested that FEMA should address the land use ordinance issue and determine, with the tribes’ input, whether NFIP had been financially beneficial to tribal members who were able to purchase flood insurance in the nontribal community where they lived. Insurance company officials we spoke with, including a Write-Your-Own company and a broker, emphasized the importance of respecting the cultural issues of dealing with tribes in any targeted outreach activities. As we have seen, for example, the FloodSmart campaign sent thousands of pamphlets to individual residents on Indian lands. One insurance broker that targets Indian tribes told us that the company had received three requests for policies after the mailing. She told us that the company had learned that tribal members tend to rely on the views of their tribal leaders for guidance and that without the buy-in of these leaders, marketing FloodSmart materials to individual members would likely not be a successful strategy. Insurance and reinsurance company officials we spoke with were aware of the Biggert-Waters Flood Insurance Reform Act of 2012 requiring FEMA and GAO to assess options and strategies for privatizing NFIP in the future and authorizing FEMA to pursue private risk-management initiatives. A reinsurance official said that his company had begun to examine whether it would be in its interest to become more involved now, given that future legislation would likely increase the private sector’s role in flood insurance. This official added that his company recognized that NFIP was not actuarially sound and that expected additions to the nearly $18 billion deficit from Superstorm Sandy could accelerate congressional interest in greater private sector involvement in providing flood insurance. Other private sector insurance options may offer tribes an alternative to NFIP. A private insurer proposed two related options that could allow tribes to purchase flood insurance at a potentially lower cost than under NFIP. The first would involve expanding the existing eligible flood insurance risk-sharing pools to obtain the critical mass of policies necessary to make low-cost flood insurance policies affordable to Indian households.
The second would establish a new private “microinsurance” program offering low-premium policies with small coverage limits tailored specifically to Indian tribes, based on similar operations in developing countries. Nonprofit Insurance Risk Pool: Two of the tribes we contacted had purchased flood insurance through an insurance risk pool offered by AMERIND Risk Management Corporation. AMERIND was organized in 1986 as a collaborative program between HUD and some Indian housing authorities to provide insurance protection for Native American low-income housing. It currently operates as a multitribal nonprofit corporation working with over 400 tribes and administers risk-sharing pools. Since 2002, AMERIND has offered a flood insurance endorsement to its standard policy, limited to HUD-assisted Indian housing. Company officials explained that through a members-only risk pool, the flood endorsement provides flood coverage to about 56,000 structures on tribal lands and charges a universal rate of $10 per structure per year (see table 3). AMERIND has a coverage limit of $15,000 for each covered structure. An Arizona tribe we contacted uses AMERIND insurance on its HUD-assisted tribal housing, and the policies included the flood protection endorsement. According to the tribal representative, the tribe has used this insurance option for about 8 years. One Oklahoma tribe we contacted had not purchased AMERIND’s flood insurance endorsement but had purchased property insurance from the company. This representative said that because of its affordability compared with NFIP and because the company is a multitribal corporation, he would refer individual members to AMERIND if they lived in HUD IHBG-assisted housing within a floodplain and needed to purchase flood insurance. USDA Rural Development also has approved AMERIND as an eligible nonflood insurer for its single-family housing programs, so Indian tribes and their members can use an appropriate AMERIND product to insure projects financed through these programs. According to USDA, the intent is to facilitate use of its programs by Indian tribes and their members for projects on trust land when conventional insurance coverage is unavailable, difficult to access, or expensive. However, the current nonprofit insurance risk-pooling option has limitations. First, while the premium rate may be lower than NFIP’s, the coverage limits for flood insurance are also generally lower. Representatives for AMERIND told us that coverage limits were low because the company had not been successful in obtaining reinsurance on the private market that would allow the company to mitigate its risk and offer full replacement costs for each structure. AMERIND does offer to provide double coverage ($30,000 per structure) for flood losses, but the premiums are more than 10 times the universal rate, ranging from about $150 to $200 per structure. These premiums are generally below NFIP rates for properties with similar coverage inside SFHAs and are generally comparable to NFIP rates for similar coverage outside SFHAs. Second, because AMERIND’s flood insurance coverage is available for HUD IHBG-assisted structures only, tribes cannot use it for all structures on tribal lands. The tribes we interviewed that used AMERIND for flood insurance were able to obtain coverage only for HUD IHBG-assisted structures. Because of this limitation, AMERIND does not have the critical mass of policies necessary to offer private low-cost flood insurance to all Indian households.
Third, communities may still face consequences if they are identified as having SFHAs and choose to obtain non-NFIP flood insurance. According to FEMA, NFIP aims to do more than simply encourage property owners to purchase flood insurance. It also encourages them to take measures to mitigate potential flood damage to their properties, and NFIP coverage is available only when certain flood protection standards have been implemented. Private Microinsurance: Another potential private sector option would be a private microinsurance program. Microinsurance is a relatively new product that allows insurers to offer low-premium policies with small coverage limits in developing areas. The concept operates much like AMERIND’s risk pool and is structured to provide low-income policyholders with a degree of “livelihood protection” or emergency expense support rather than full indemnity for loss. We spoke with officials from a reinsurance company that was recently awarded a grant from a member of the World Bank group to develop a market for microinsurance in an agriculture-based developing country. These officials and officials from the reinsurance company’s insurance subsidiary said that, given the recent congressional interest in looking into private options for flood insurance, they would be interested in working with AMERIND or helping to develop another Native American flood insurance program that would cover all tribal member homeowners and businesses. They said that a mandatory coverage provision would solve the coverage problem and reduce uncertainty, making the provision of microinsurance more attractive to and sustainable for insurers and reinsurers. But they added that even with mandatory group coverage, they saw benefits to mapping all lands, because a private microinsurance risk pool would need to charge more for properties that were located on unmapped tribal lands than for similar properties located on mapped tribal lands. During the course of our work for this report, FEMA developed a draft statement of work for its upcoming private market assessment of NFIP. The draft statement, dated December 17, 2012, included a requirement for the contractor to assess a broad array of instruments, including reinsurance, microinsurance, and flood insurance pools. FEMA officials confirmed that the inclusion of microinsurance and insurance pools in the study was finalized after we raised and discussed those alternatives with them and that they considered both alternatives to be worth studying. FEMA officials told us that they planned to issue the statement of work early in January 2013. Congress created NFIP with the intent of providing affordable flood insurance to communities and households in order to financially protect property owners and reduce the cost of federal postdisaster assistance, but participation by Indian tribes has been low. Even on Indian lands that have experienced flooding, tribes and tribal members often do not participate, and the total number of policies written to tribes and tribal members accounts for less than 1 hundredth of a percent of FEMA’s portfolio. FEMA has provided tribes with training and technical assistance and has to some extent helped tribes to understand the flooding risks they face. However, several factors have contributed to the low participation rate, including limited mapping on Indian lands; affordability; lack of information on NFIP; and tribal land use issues, including confusion about legal restrictions on activities on Indian lands. 
Limited mapping, in particular, has contributed to a lack of awareness both of the risk of flooding and of the benefits of NFIP. FEMA has generally focused its mapping efforts on densely populated and coastal areas in order to make the best use of its resources. However, increased mapping of less densely populated rural areas, including Indian lands, is in line with Congress’s focus on increasing tribes’ participation in NFIP and is key to raising awareness of the types of flood risks residents of these areas face. Expanding its flood mapping efforts will challenge FEMA to balance its need to make the best use of scarce resources with the needs of these previously underserved communities. To help increase Indian tribes’ participation in NFIP, we recommend that the Administrator of FEMA examine the feasibility of making mapping of tribal lands a higher priority. We provided a draft of this report for review and comment to USDA Rural Development, FEMA within the Department of Homeland Security, HUD, and BIA within the Department of the Interior. A letter from the Director of the Departmental GAO-OIG Liaison Office within the Department of Homeland Security stated that FEMA will take steps to make mapping of tribal lands a higher priority. The director also stated that doing so will be challenging due to FEMA’s scarce resources and noted the agency’s appreciation for GAO’s acknowledgment of its resource limitations. In addition, the director said that FEMA will consider the suggestions made by tribal representatives for increasing tribal participation in flood insurance programs. The letter is reprinted in appendix II. We also received technical comments from USDA Rural Development, FEMA, and HUD, which we incorporated in the report as appropriate. BIA did not provide any comments. We are sending copies of this report to appropriate congressional committees, and the Secretaries of Agriculture, Homeland Security, Housing and Urban Development, and Interior. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine (1) factors contributing to the current low levels of National Flood Insurance Program (NFIP) participation by Indian tribes, (2) the Federal Emergency Management Agency’s (FEMA) efforts to increase tribes’ participation in NFIP, and (3) administrative and legislative actions that could encourage Indian tribes and their members to increase their participation in NFIP and potentially other flood insurance programs. For the purposes of this review, we limited our study primarily to flood insurance policies held on Indian tribal lands (such as reservations), because FEMA does not collect demographic data such as race or ethnic origin of NFIP policyholders. Therefore, no comprehensive data are available on members of Indian tribes who are living in nontribal communities and may carry individual NFIP policies. We also interviewed representatives from Alaska, because many Alaska Native communities are vulnerable to floods, but do not have designated reservations that could participate in NFIP. 
With only one reservation in the state, tribes in Alaska can participate in NFIP only through the municipalities in which their communities are located. To address all three objectives, we reviewed NFIP laws and policy documents. We reviewed FEMA data on communities participating in NFIP, including those designated as tribal communities, and on tribes that had flood hazard maps but were not participating in NFIP for various reasons. FEMA provided us with information on its process for collecting and analyzing the data in its Community Information System database and on the agency’s data reliability measures. We determined that the data FEMA provided to us were sufficiently reliable for our reporting purposes. In addition, we reviewed prior GAO work on flood insurance, Indian tribes, and disaster preparedness, and reports by the Congressional Research Service, to compile background information on NFIP. We interviewed and gathered documentation from officials at FEMA and other federal agencies with programs that assist Indian tribes, such as the Bureau of Indian Affairs (BIA) within the Department of the Interior, Department of Housing and Urban Development (HUD), and the U.S. Department of Agriculture (USDA) Rural Development. In addition, we reviewed regulations for those agencies’ programs. We spoke with representatives from the State of Alaska; the National Flood Determination Association, which provides flood mapping data to mortgage lenders and insurers; the insurance and reinsurance industries; a nonprofit risk-pooling organization; FloodSmart, which administered FEMA’s NFIP media campaign; and selected Indian tribes. We selected the tribes from among those on FEMA’s lists of tribes that were participating in NFIP and those that had flood hazard maps but were not participating in NFIP. We selected a purposive non-representative sample of eight participating tribes for interviews based on the number of individual policies within each tribe, geographic diversity, and tribe size. We also selected a purposive non-representative sample of six nonparticipating tribes for interviews based on the reason for nonparticipation, geographic diversity, and tribe size. In addition, our prior work on the Native American Housing Assistance and Self-Determination Act of 1996 (NAHASDA) program partially informed our selection of tribes. Specifically, because we had already established communication with certain tribes, we selected them over comparable tribes on either list. Because of time constraints and because several selected tribes were impacted by a natural disaster (Superstorm Sandy), we were able to interview five participating and five nonparticipating tribes. We also judgmentally selected a tribe in Alaska that we had contacted previously to obtain a perspective on NFIP from a tribe in that state. To obtain perspectives from the insurance industry, we interviewed representatives from an insurance company that we contacted while conducting prior work related to the Write-Your-Own insurance program, a vendor that administers NFIP flood policies for Write-Your-Own insurance companies, and an insurance broker that specializes in working with Indian tribes. To determine factors contributing to the current low levels of NFIP participation by Indian tribes, we reviewed FEMA’s data on tribal community participation. We also asked FEMA officials about the agency’s process for mapping Indian lands (or providing flood hazard maps to Indian communities), and options for tribes to participate in NFIP.
In our interviews with other federal officials, tribal representatives, and others, we asked about factors that may positively or negatively affect whether tribes participate in NFIP. To determine the efforts FEMA was making to increase awareness of and encourage participation in NFIP by Indian tribes, we asked officials at FEMA about outreach and technical assistance they provided to Indian tribes related to floods and flood insurance. In addition, we asked FloodSmart representatives about any efforts to market NFIP to Indian tribes and their members. We also asked officials at BIA, HUD, and USDA about information they may share with tribes on flood insurance in providing program support. Further, we asked tribal representatives about their interactions with FEMA and other agencies and information or assistance they had received related to NFIP. To determine what administrative or legislative actions could encourage Indian tribes and their members to increase their participation in NFIP and potentially other flood insurance programs, we reviewed NFIP regulations and legislation and guidance on FEMA coordination with and outreach to Indian tribes. We interviewed representatives from the insurance and reinsurance industries about other flood insurance options, in addition to NFIP, that may facilitate tribes’ purchase of flood insurance. We also asked federal officials and tribal representatives about actions that FEMA or Congress could take to encourage tribes participating in NFIP to increase their use of the program and tribes not participating to join NFIP. Participating: Based on number of individual NFIP policies held by the tribe, size of tribe, and geographic diversity, a total of eight tribes were selected for inclusion as well as six backup tribes. Tribes and backup tribes were selected within each region, except in the Northeast region where there was only one tribe included in our sample frame. One group of selected tribes had the largest number of policies and the backups had the second largest number of policies. The other group of tribes had the lowest number of policies. Among the cohort with the lowest number of policies, the tribes selected within each region were those with the largest enrollment, and the backups were those with the second largest enrollment. Nonparticipating: Based on reason for nonparticipation (such as withdrawn or suspended from the program), size of tribe, and geographic diversity, a total of six tribes were selected for inclusion as well as three backup tribes. Tribes and backup tribes were selected within each region, except in the Northeast region where there was only one tribe included in our sample frame. Additionally, the one withdrawn tribe and the one suspended tribe are also included in our selected tribes. In the other three regions, one group of selected tribes had the largest tribal enrollment and the backups had the second largest tribal enrollment. We conducted this performance audit from August 2012 to January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
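The selection of participating tribes described above (within each region, the tribe with the most individual policies plus a backup, and, from the cohort with the fewest policies, the tribe with the largest enrollment) can be expressed as a short procedure. The sketch below uses invented tribe records and simplifies the published methodology; it is illustrative only.

```python
# Illustrative sketch of the purposive selection logic for participating tribes.
# Tribe records below are invented placeholders, not data from this review.

from collections import defaultdict

tribes = [
    # (region, tribe, individual NFIP policies, enrollment)
    ("Northwest", "Tribe A", 175, 4_000),
    ("Northwest", "Tribe B", 60, 9_000),
    ("Northwest", "Tribe C", 0, 12_000),
    ("Northwest", "Tribe D", 0, 2_500),
    ("Southwest", "Tribe E", 30, 20_000),
    ("Southwest", "Tribe F", 1, 1_000),
    ("Southwest", "Tribe G", 0, 7_000),
]

by_region = defaultdict(list)
for region, name, policies, enrollment in tribes:
    by_region[region].append({"name": name, "policies": policies, "enrollment": enrollment})

for region, records in by_region.items():
    # Tribe with the most individual policies, plus a backup with the second most.
    by_policies = sorted(records, key=lambda r: r["policies"], reverse=True)
    backup = by_policies[1]["name"] if len(by_policies) > 1 else "none"
    # From the cohort with the fewest policies, the tribe with the largest enrollment.
    fewest = min(r["policies"] for r in records)
    low_cohort = sorted((r for r in records if r["policies"] == fewest),
                        key=lambda r: r["enrollment"], reverse=True)
    print(f"{region}: most policies -> {by_policies[0]['name']} (backup: {backup}); "
          f"lowest-policy cohort, largest enrollment -> {low_cohort[0]['name']}")
```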
In addition to the contact named above, Andy Finkel (Assistant Director), Bernice Benta-Jackson, Emily Chalmers, Brian Friedman, Jeffery Malcolm, Marc Molino, Patricia Moye, Roberto Pinero, and Andrew Stavisky made key contributions to this report.
Indian tribes' participation in NFIP is extremely low, even though some Indian lands are at high risk of flooding. In response to a Moving Ahead for Progress in the 21st Century Act mandate, GAO examined (1) factors affecting Indian tribes' participation in NFIP, (2) FEMA's efforts to increase tribes' participation in NFIP, and (3) administrative and legislative actions that could increase tribes' participation. GAO reviewed FEMA data on community participation in NFIP and prior GAO reports on flood insurance and Indian tribes, interviewed officials from selected Indian tribes and insurance companies, and collected information from relevant agencies and industry officials. As of August 2012, just 37 of 566 federally recognized tribes (7 percent) were participating in the National Flood Insurance Program (NFIP), and 3 tribes accounted for more than 70 percent of policies. A number of factors have affected tribes' participation. First, the Federal Emergency Management Agency (FEMA) has not placed a high priority on mapping rural areas, including many Indian lands, for flood risk, and most tribal lands remain unmapped. Without flood hazard maps, tribal communities may be unaware of their flood risk, even in high-risk areas. Partly for this reason, the risk of flooding is perceived as relatively low on many tribal lands. Further, tribes may lack the resources and administrative capacity needed to administer NFIP requirements, and NFIP premiums are often too high for low-income tribal members. Finally, unique tribal issues can make participation difficult. For example, some Indian tribes do not have reservations over which they can enact and enforce the land use ordinances that are required for NFIP participation. Instead, many have lands that were allotted to individuals rather than to a tribal entity, limiting the tribes' jurisdiction. FEMA has done some outreach to tribes, largely through emergency management and homeland security training for tribal officials, technical assistance to tribes that are preparing their multihazard mitigation plans, and marketing through the NFIP FloodSmart campaign. FEMA officials told us that the courses offered through its Emergency Management Institute helped to educate tribal officials about NFIP and floodplain management and that its curricula included courses for floodplain managers on their roles and responsibilities, flood insurance, and NFIP rules and regulations. One tribal representative told us that he was participating in an ongoing curriculum and several tribes had developed multihazard mitigation plans. Finally, both the Department of Housing and Urban Development and the U.S. Department of Agriculture, Rural Development may provide NFIP information to Indian tribes as they provide assistance in the form of housing and infrastructure grants, loans, and loan guarantees. Tribal representatives suggested steps that FEMA could take to encourage participation in NFIP--for example, placing a higher priority on mapping Indian lands and increasing FloodSmart marketing to tribal leaders rather than individuals. Given ongoing congressional interest in private sector alternatives to NFIP, GAO also explored whether private alternatives exist that could offer affordable coverage to low-income tribal members--for example, by expanding access to risk-pooling programs that could help insure more tribal households. One such program already insures thousands of Indian properties. 
Another relatively new product, microinsurance, would involve insurers offering less expensive policies with relatively low coverage limits but with coverage available to all tribes. FEMA said that its NFIP privatization study mandated by the Biggert-Waters Flood Insurance Reform Act of 2012 would include an assessment of these alternatives. GAO recommends that the FEMA Administrator examine ways to make mapping of tribal lands in flood-prone areas a higher priority. FEMA agreed with GAO's recommendation.
The FFEL and DL programs have substantially different structures but both provide student loans to help students meet the costs of obtaining a postsecondary education. FFEL loans are provided by nonfederal lenders and repayment is guaranteed by the federal government. Under the DL program, the federal government provides loans to students and their families, using federal capital. Figure 1 shows the FFEL and DL program loan volume outstanding as of September 30, 2008 and 2009. In the FFEL program, student loans are made by nonfederal lenders, which can be for-profit or nonprofit entities. Lenders are protected against borrower defaults by federal government guarantees that are administered by guaranty agencies. Guaranty agencies are state or nonprofit entities that also perform other administrative and oversight functions under the FFEL program. For example, guaranty agencies provide counseling to borrowers regarding delinquent loan repayment and initiate collections on defaulted loans. Generally, lenders provide the FFEL loan proceeds to a student’s school, which then credits the student’s account and disburses the residual amount, if any, to the student. Schools, lenders, and guaranty agencies often employ third-party servicers to perform functions related to the administration of the FFEL program. For example, a lender may hire a servicer to process borrower payments. Table 1 details the number of FFEL participants. In the DL program, student loans are fully funded by the federal government, which provides the loan proceeds to the student’s school. The school then credits the student’s account and disburses any residual amount to the student. Schools sometimes contract with third-party servicers to assist in administering the operations of the DL program. In addition, Education contracts with a servicer (DL servicer) to administer certain aspects of the DL program, such as payment processing. The number of participants in the DL program is detailed in table 2. Under HCERA, no new FFEL loans may be made after June 30, 2010. Borrowers who may have been eligible to obtain new FFEL loans prior to the passage of HCERA could receive loans under the DL program. Accordingly, the number of DL borrowers is expected to increase with the expansion of the program. Education has awarded contracts to four additional DL servicers to begin servicing direct loans by August 31, 2010. Audits required under FFEL or DL are performed in accordance with guidance issued by the Office of Management and Budget (OMB) or the applicable Department of Education Office of Inspector General (OIG) audit guide. States, local government entities, and nonprofit entities are generally required to have their audits performed in accordance with OMB Circular No. A-133, Audits of States, Local Governments, and Nonprofit Institutions, although, if federal student assistance is the only federal program in which the entity participates, OMB Circular No. A-133 gives the entity the option of using the program-specific audit guide issued by the OIG in place of the guidance produced under the Circular. For-profit entities are required to have their audits performed in accordance with the applicable OIG audit guide. The FFEL and DL programs generally have different audit requirements stemming mainly from different program structures. The FFEL program relies on lenders, guaranty agencies, and other entities that are subject to statutory, regulatory, and contractual audit requirements. 
The DL program does not have as many of these audit requirements because DL loans are provided by the federal government, and fewer external entities are involved. The audit requirements set out under the FFEL and DL programs are similar with regard to schools and their servicers, which are participants in both programs. We noted that certain for-profit lender audit guidance was inconsistent with regulations. Finally, oversight procedures for the DL servicer were designed to assess the DL servicer’s performance in servicing loans in the program. Different oversight procedures are planned for four additional DL servicers expected to begin servicing direct loans by August 31, 2010. The FFEL and DL programs have different statutory and regulatory requirements for audits and program reviews, with more audit requirements in place for the FFEL program, which involves more participants external to the government. For instance, because the FFEL program relies on thousands of nonprofit and for-profit lenders, there are regulatory requirements for compliance audits and program reviews of those lenders. Such requirements do not apply to the DL program, which provides student loans through a single lender—the federal government. Similarly, required agreed-upon procedures engagements for the Ensuring Continued Access to Student Loans Act (ECASLA) and audits of 9.5% Special Allowance Payments are only applicable to lenders in the FFEL program. Figure 2 summarizes the audit and review requirements for the FFEL and DL programs, and appendix I includes more details about these activities. While our analysis showed audit requirements generally differed, schools under both programs had similar requirements to have annual financial statement audits performed by independent public accountants (IPA). School financial statement audits focus on whether the financial statements are fairly presented in accordance with generally accepted accounting principles. These financial statement audits are to be performed in accordance with GAGAS. GAGAS also requires IPAs to report on the results of certain tests performed on internal controls over financial reporting and compliance with certain provisions of laws, regulations, and program requirements. Financial statement audit reports provide Education with information about the financial condition of participants, any significant internal control deficiencies, and instances of noncompliance. Third-party servicers employed by schools to aid in the administration of their federal loans are not generally required to have financial statement audits under either the FFEL or DL programs. Both programs also require schools and school servicers to have annual compliance audits performed by IPAs. The audits focus on whether these participants comply with applicable statutes, regulations, and program requirements. For example, school compliance audits for both programs are designed to test whether schools perform student eligibility validation. These audits are to determine whether a school has verified that certain student requirements, such as citizenship and financial need, have been met. In addition, schools participating in the student loan programs are required to follow specified criteria for applying loan proceeds to students’ accounts and disbursing residual amounts to students within established time frames. 
To illustrate, for students borrowing from the FFEL or DL programs, schools should not credit a registered student’s account more than 10 days before the first day of classes. For both programs, compliance with these requirements is monitored through the annual compliance audit. If performed properly, the required audits for FFEL and DL participants should address federal and borrower interests. Audits address federal fiscal interests if they are designed to help protect the government from financial loss and address borrower interests if they are designed to help ensure that qualified individuals (1) have access to federal student loans and (2) are protected from financial loss. For instance, auditors assess whether schools that participate in either program complied with refund requirements. Refund requirements for both programs include the proper return of program funds in the case of unearned tuition and other charges for a student who received federal student aid if the student did not register, dropped out, was expelled, or otherwise failed to complete the period of enrollment. Proper refunds to the lender or federal government reduce the outstanding loan amount, thus protecting federal and borrower interests. As noted previously, HCERA terminated the authority to make new FFEL loans after June 30, 2010. However, FFEL loans outstanding after that date will continue under the same structure with Federal Student Aid oversight for many years, depending on the repayment plan. Accordingly, we identified and reviewed audit objectives and related guidance and found one area where the guidance for compliance audits of for-profit and nonprofit FFEL lenders differed. FFEL lenders can be for-profit or nonprofit and, in some cases, can be the schools themselves. For-profit lenders are required to have their audits performed in accordance with the OIG Lender Audit Guide. Nonprofit lenders are generally required to have their audits performed in accordance with OMB Circular No. A-133, although the Single Audit Act and OMB Circular No. A-133 allow lenders to elect to have their audits performed using the OIG audit guide if federal student assistance is the only federal program in which the lender participates. The OMB requirements for compliance audits of nonprofit lenders include, for example, the following audit objective: “School lenders proceeds: Determine whether schools that made FFEL loans use borrower interest payments, Education special allowance payments, interest subsidies, and any proceeds from the sale of loans to supplement needs-based grants for its students, as required.” This audit objective is designed to assess whether school lenders appropriately comply with regulations affecting significant amounts of proceeds from loans. The OIG Lender Audit Guide has been supplemented with several amendments for specific changes to audit requirements, but has not been comprehensively updated since December 1996 and, as amended, did not address this audit objective. OIG officials told us they plan to update the OIG Lender Audit Guide to appropriately address this omission. The functions performed by the DL servicer are similar to certain functions performed by lenders, guaranty agencies, and their servicers in the FFEL program. The DL servicer is not required to have an independent auditor perform financial and compliance audits similar to those required of guaranty agencies, guaranty agency servicers, and lender servicers in the FFEL program. 
Instead, Federal Student Aid directly oversees the DL servicer’s performance as a federal contractor through monthly reviews of performance metrics as well as other procedures, including monthly reconciliations of loan balances recorded by the DL servicer to those in Federal Student Aid records. Federal Student Aid officials are to review reports generated by the Independent Quality Control Unit, a component of the DL servicer that performs analysis to help ensure that the DL servicer’s performance metrics are correctly calculated and accurately reported and that corrective actions from prior audits are implemented. These oversight procedures are designed to assess and evaluate the DL servicer’s performance in servicing loans in the DL program. Our analysis showed that the objectives of the oversight procedures to be performed by Federal Student Aid over the DL servicer share some similarities with the objectives being addressed by audits of FFEL lenders, guaranty agencies, and their servicers. For example, both FFEL lenders and the DL servicer are to update student records to reflect changes in a student’s status—such as student enrollment, which affects the repayment of the loan. For FFEL lenders, the performance of this function is to be evaluated in the annual compliance audit of lenders performed by IPAs. For the DL servicer, this function is to be evaluated through the oversight procedures performed by Federal Student Aid staff, including monthly reviews of performance metrics that monitor the DL servicer’s performance. For example, Federal Student Aid is to monitor whether the DL servicer meets the 2-day standard for completing student status updates and the 98 percent standard for status update accuracy. Other examples of similar functions monitored by compliance audits in the FFEL program and by oversight procedures in the DL program include timely and accurate application of loan payments to borrower accounts and timely review and processing of loan discharge claims. In 2009, Federal Student Aid awarded contracts to four additional servicers to address increased direct loan volume stemming from changed student loan market conditions and potential further volume increases. HCERA, passed in March 2010, terminated the authority to make new FFEL loans after June 30, 2010, which, according to Federal Student Aid officials, will add substantially to Federal Student Aid’s direct loan volume and DL servicing needs. The new servicers, expected to begin servicing direct loans by August 31, 2010, are subject to oversight procedures that differ from those applied to the current DL servicer. According to Federal Student Aid’s contract monitoring plan, these activities will include transaction analysis and reconciliations as well as internal control and program compliance reviews. For example, according to the contract monitoring plan, Federal Student Aid staff are expected to perform periodic transaction analysis at the borrower account level to determine the servicing accuracy of transactions. Federal Student Aid officials and the DL servicers are to discuss issues identified through transaction analysis and the status of corrective actions at weekly operational meetings. In addition, the monitoring plan states that program compliance reviews are to be conducted as needed, at least annually, to determine if servicing is in compliance with requirements. According to Federal Student Aid officials, guidance for some of these oversight procedures is under development. 
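As an illustration of the metric-based monitoring described above for the DL servicer (the 2-day standard for completing student status updates and the 98 percent standard for update accuracy), the following is a minimal Python sketch. The record fields, the function name, and the interpretation of the timeliness standard as applying to every update in a reporting period are assumptions made for illustration; they do not represent Federal Student Aid's actual systems or procedures.

    # Illustrative sketch only; field names and the all-updates-timely
    # interpretation of the 2-day standard are assumptions.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class StatusUpdate:
        received: date    # date the enrollment status change was received
        processed: date   # date the servicer completed the update
        accurate: bool    # whether a quality-control recheck confirmed the update

    def meets_standards(updates, max_days=2, min_accuracy=0.98):
        """Return rates and pass/fail flags for the timeliness and accuracy standards."""
        if not updates:
            raise ValueError("no status updates to evaluate")
        n = len(updates)
        timely = sum((u.processed - u.received).days <= max_days for u in updates)
        accurate = sum(u.accurate for u in updates)
        return {
            "timeliness_rate": timely / n,
            "accuracy_rate": accurate / n,
            "meets_timeliness": timely == n,                 # every update within 2 days
            "meets_accuracy": accurate / n >= min_accuracy,  # at least 98 percent accurate
        }

    # Example: the second update took 3 days, so the timeliness standard is missed.
    sample = [
        StatusUpdate(date(2010, 3, 1), date(2010, 3, 2), True),
        StatusUpdate(date(2010, 3, 1), date(2010, 3, 4), True),
        StatusUpdate(date(2010, 3, 5), date(2010, 3, 5), True),
    ]
    print(meets_standards(sample))

In practice, as noted above, Federal Student Aid reviews such metrics monthly and relies on the servicer's Independent Quality Control Unit to verify that they are correctly calculated and accurately reported.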
The contract monitoring plan also calls for the additional DL servicers to be subject to internal control examinations performed by IPAs in accordance with Statement on Auditing Standards No. 70. Each additional DL servicer is to provide Federal Student Aid with an IPA report on the examination of its operational controls semiannually and on the examination of its information technology controls annually. These examinations are in addition to Education’s annual review of internal controls required by OMB Circular No. A-123. In addition, the contracts call for the additional DL servicers to be subject to performance measures focused on default prevention and surveys of borrower satisfaction, school satisfaction, and Federal Student Aid staff satisfaction with servicer performance. These performance measures are to be used to compare the additional DL servicers’ relative performance as one factor in determining the allocation of direct loans to them for servicing. Education officials expect to have these oversight procedures in place by the time the additional DL servicers begin servicing direct loans. FFEL and DL participants submit required audits to Federal Student Aid. Components of Federal Student Aid’s Program Compliance office, including the School Eligibility Channel and Financial Partner Eligibility and Oversight (Financial Partners), are responsible for providing oversight by ensuring that the audits performed comply with statutory and regulatory requirements. The School Eligibility Channel is responsible for providing oversight of audits of schools and school servicers that participate in the FFEL and DL programs. Financial Partners is responsible for the oversight of audits of lenders, guaranty agencies, and their servicers participating in the FFEL program. These activities are to be accomplished through audit resolution and program review processes. Figure 3 depicts the respective oversight responsibilities of the School Eligibility Channel and Financial Partners. The School Eligibility Channel and Financial Partners are responsible for logging receipt of the audit report, performing an acceptability review, and taking steps to resolve the audit. According to policies and procedures, Federal Student Aid staff track findings contained in audit reports and use them to oversee the programs by monitoring whether corrective actions are taken. Tracking systems used by the School Eligibility Channel and Financial Partners include the Postsecondary Education Participants System (PEPS), eZ-Audit, and various Excel-based tracking sheets. Figure 4 depicts the process used by Federal Student Aid components for reviewing audit reports. Some processes described in figure 4 are designed differently depending on the type of participant. Specifically, according to Federal Student Aid policies and procedures, schools are required to submit audit reports—both financial statement and compliance audits—to Federal Student Aid electronically via the eZ-Audit system. Other participants, including lenders, guaranty agencies, and their servicers, are expected to submit reports in paper or electronic form. For audits performed in accordance with OMB Circular No. A-133, Federal Student Aid staff are to obtain the audit reports from the Federal Audit Clearinghouse Web site, a governmentwide audit information repository. Federal Student Aid staff are to perform acceptability reviews on the audit reports using checklists that address issues such as whether all required reporting elements are included. 
The School Eligibility Channel uses contractors to assist with the acceptability review of school audit reports. After the acceptability review is completed, Federal Student Aid policies and procedures require staff to review the submitted audit report and notify the participant that the audit has been accepted or explain steps required for satisfactory audit resolution. Statutes and regulations provide authority for Federal Student Aid to perform a program review as a method of program oversight of participants. Regulations also authorize Federal Student Aid to initiate administrative hearings that can lead to sanctions, including the suspension of the participant from the program. Federal Student Aid staff are to enter resolution information into eZ-Audit or PEPS once an audit is resolved. Similar processes are to be used for biennial program reviews of schools and lenders performed by guaranty agencies. Special allowance payment audits and ECASLA agreed-upon procedure reports, also required from participating lenders, are subject to similar report review procedures. Federal Student Aid procedures called for using acceptability review checklists and Excel-based tracking sheets designed specifically for these kinds of reports to ensure completeness of the reports and to track the status and ensure the resolution of reported findings. For these reports, findings resolution could include adjusting special allowance payments made to lenders or coordinating with the lender to remove ineligible loans from an ECASLA portfolio. Financial Partners has acknowledged that inefficiencies exist with the current tracking system. For example, Financial Partners staff must manually enter the receipt of the compliance audit reports in Excel-based tracking sheets, while the receipt of the school audit reports is automatically logged through eZ-Audit electronic submission. Further, PEPS does not allow Financial Partners to readily identify those lenders required to submit annual compliance audits. Accordingly, Financial Partners staff must analyze database information to identify these lenders. Further, because PEPS does not track all audit information that is important to Financial Partners, staff supplement their use of PEPS with Excel-based tracking sheets. To address these inefficiencies, Education is in the process of designing a new system—referred to as Integrated Partner Management (IPM)—that will replace the existing systems and, among other things, provide the capability to track audit findings. According to Education officials, IPM is currently in the requirements phase, which is expected to be completed in July 2010, with implementation in phases in 2012. We noted a gap in Education’s policies and procedures regarding review of audited financial statements for lender servicers. Education regulations require lender servicers that participate in the FFEL program to submit audited financial statements to Education annually. However, our review found that lender servicers did not submit their audited financial statements to Education. Federal Student Aid did not have procedures in place to review these financial statement audit reports and therefore did not conduct any follow-up to ensure that the audit reports were received and reviewed. 
Federal Student Aid officials told us they consider the risk to the government of not receiving these servicers’ audited financial statements to be low because lenders are ultimately responsible for the loans and have the responsibility to ensure that their servicers are financially capable. By not requiring the review of the audited financial statements of lender servicers, Federal Student Aid runs the risk of missing significant findings disclosed in these reports. Such findings could relate to control weaknesses over information security and financial reporting that may not be addressed in the annual compliance audits that Federal Student Aid staff review. Further, Federal Student Aid staff might not be informed if a lender servicer received an audit opinion other than unqualified. Concerns such as these might indicate potential problems regarding the servicer’s ability to continue program operations effectively. In addition, because one servicer may service multiple lenders, the risk to the government and borrowers increases should one of these servicers be in violation of any provision of federal regulations. According to GAO’s Internal Control Management and Evaluation Tool, agencies should obtain and report to managers any relevant external information that may affect the achievement of their missions, goals, and objectives. Unless Federal Student Aid receives and reviews these financial statement audit reports, it may not be fully aware of risks to the government and borrowers, and its ability to properly oversee the FFEL program could be impaired. Significant federal resources are committed to providing loans so that students’ educational goals can be achieved. Effectively overseeing the FFEL and DL programs is critical to minimize the risks to taxpayers and borrowers. Although no new FFEL loans will be made after June 30, 2010, FFEL loans unpaid at that time will remain under Federal Student Aid’s oversight for possibly 30 years. Improvements are needed in the audit guidance and review procedures for the FFEL program. The gaps we noted in the OIG Lender Audit Guide used to audit lenders and in Federal Student Aid’s policies and procedures regarding its review of audited financial statements for lender servicers expose the program to unnecessary risk. As Education moves forward to administer the expanded DL program, maintaining and enhancing its oversight procedures will help ensure that federal and borrower interests continue to be protected. To help address any gaps in the guidance for audits of FFEL lenders performed in accordance with the OIG Lender Audit Guide, we recommend that the Education Inspector General update the OIG Lender Audit Guide to include all appropriate regulatory audit requirements. To ensure that Education properly oversees the ongoing servicing of outstanding FFEL student loans and mitigates risks related to lender servicers, we recommend that the Secretary of Education direct the Chief Operating Officer of the Office of Federal Student Aid to develop and implement policies and procedures requiring Federal Student Aid review of audited financial statements for lender servicers. In written comments on a draft of this report, the Education Office of Inspector General and Federal Student Aid agreed with our recommendations. These comments are reprinted in their entirety in appendixes III and IV, respectively. 
Regarding our recommendation to update the OIG Lender Audit Guide, the Education Inspector General concurred that the guide needs to be made current with all compliance requirements and anticipates updating and issuing a revised guide by December 2010. Regarding our recommendation to develop and implement policies and procedures requiring the review of lender servicer audited financial statements, the Chief Operating Officer of Federal Student Aid acknowledged the need to update the OIG Lender Audit Guide and existing processes and procedures to require lender servicers to prepare and submit audited financial statements, and stated that Federal Student Aid will review the audited financial statements. Education also provided technical comments, which we incorporated in this report, as appropriate. We are sending copies of this report to the Secretary of Education, the Inspector General of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-9095 if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V. The following information is from GAO, Federal Student Loans: Audits and Reviews of the Federal Family Education Loan and Federal Direct Loan Programs, GAO-09-992R (Washington, D.C.: Sept. 30, 2009), enclosure, p. 16. To address the first objective, we reviewed our September 30, 2009, report to determine the extent to which the audit and review requirements were applicable to both the Federal Family Education Loan and the William D. Ford Federal Direct Loan (DL) program participants in order to identify similarities and differences. We obtained and reviewed relevant audit guides to determine if the audit objectives addressed statutory and regulatory requirements to be met by the programs’ participants. For the DL program, we also interviewed knowledgeable officials regarding the Department of Education’s (Education) procedures to oversee the performance of the DL servicer, and we reviewed the relevant oversight procedures. For nonprofit and for-profit schools and lenders, we analyzed Office of Management and Budget (OMB) Circular No. A-133 and the Education Office of Inspector General (OIG) audit guides to determine if they addressed similar objectives. To assess whether the audits as designed addressed federal and borrower interests, respectively, we determined if the audits are designed to help protect the government from financial loss, and help ensure that qualified individuals have access to federal student loans and are protected from financial loss. For example, we determined if the audit guides focused on determining whether the students and lenders met eligibility requirements to participate in these programs. We interviewed officials from Education’s Office of Federal Student Aid (Federal Student Aid) and the OIG, including the Acting Director of Financial Partner Eligibility and Oversight (Financial Partners), the General Manager of the School Eligibility Channel, and the Deputy Assistant Inspector General for Audit, to obtain clarification and explanations for any discrepancies identified during our review of documentation. The scope of our audit did not include testing that the audit guides were used by the auditors as intended. 
In addition, our work did not include program reviews conducted by guaranty agencies and other Federal Student Aid reviews because (1) in some cases, these reviews had similar objectives to the audits that we did include in our study and (2) in other cases, the reviews were risk-based and addressed specific operating conditions, and therefore these objectives were unique to each review. To address the second objective, we focused on the design of the processes Education uses to oversee the programs and to ensure compliance with statutory and regulatory requirements for the timely submission of audit reports. We reviewed applicable statutes and regulations and Federal Student Aid policies and procedures, including process flow diagrams and audit acceptability checklists. To further our understanding of the design of Education’s processes for overseeing these programs and ensuring compliance, we observed systems demonstrations that included automated and Excel-based systems used to track receipt of audits and related findings. During these demonstrations, we observed actual steps taken by staff in order to review, and if necessary resolve, the audit. We obtained and reviewed supporting documentation referenced during these demonstrations, such as audit acceptability checklists and copies of Excel-based tracking sheets, used by staff to determine the sufficiency of the audit report’s content and to ensure the timeliness of audit submissions, respectively. We interviewed officials from Federal Student Aid and OIG, including the Acting Director of Financial Partners, the General Manager of the School Eligibility Channel, and the Deputy Assistant Inspector General for Audit, to obtain clarification and explanations for any discrepancies identified during our review of documentation and the demonstrations. We focused on describing the processes Education has designed to ensure that applicable requirements are being met. While the scope of our audit did not include testing the implementation of these processes including controls, as appropriate, we noted any design deficiencies. We requested comments on a draft of this report from Education. We received written comments from the Education Inspector General and the Chief Operating Officer of Federal Student Aid (reprinted in their entirety in appendixes III and IV, respectively). We conducted this performance audit at Federal Student Aid offices in Washington, D.C., from August 2009 to July 2010 in accordance with GAGAS. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, significant contributions to this report were made by Jack Warner (Assistant Director), Jennifer Dent, Chau Dinh, P. Barry Grinnell, and Danietta Williams. Francine DelVecchio and Jason Kirwan also made key contributions.
The Higher Education Opportunity Act of 2008, Pub. L. No. 110-315, mandated GAO to study the financial and compliance audits and reviews required or conducted for the Federal Family Education Loan (FFEL) program and the Federal Direct Student Loan (DL) program. The Department of Education's (Education) Office of Federal Student Aid is responsible for administering these programs. This report focuses on (1) identifying differences and similarities in audit requirements and oversight procedures for the FFEL and DL programs, including anticipated changes to selected oversight activities and (2) describing how the Office of Federal Student Aid's policies and procedures are designed to monitor audits and reviews. To do so, GAO interviewed Education and inspector general officials and reviewed numerous audit guides, agency procedures, checklists, and audit tracking systems. GAO identified differences and similarities in audit requirements and oversight procedures for the two programs. Differences include the following: (1) The FFEL and DL programs generally had different audit requirements stemming primarily from divergent program structures. The FFEL program relied on lenders, guaranty agencies--which administer federal government loan guarantees to lenders--and other entities that were subject to statutory and regulatory audit requirements. The DL program did not have as many audit requirements because DL loans are provided by the federal government, and fewer external entities are involved. (2) GAO found differences in audit requirements for nonprofit and for-profit lenders. Certain applicable audit objectives included in Office of Management and Budget (OMB) requirements for compliance audits of nonprofit lenders were not included in the Department of Education Office of Inspector General (OIG) Lender Audit Guide for compliance audits of for-profit lenders. As a result, audits of lenders performed in accordance with the OIG Lender Audit Guide were at risk of omitting compliance testing for a key audit objective. Similarities in audit requirements and oversight procedures include these: (1) Schools were subject to annual financial statement and compliance audits under both programs. (2) The functions performed by the DL servicer, with which Education contracts to administer certain functions of the DL program, were similar to functions performed by lenders, guaranty agencies, and their servicers in the FFEL program. GAO's analysis found that objectives addressed by FFEL participant compliance audits were similar to the objectives addressed through oversight procedures for the DL servicer, such as Education's review of the servicer's monthly performance metrics. The passage of the Health Care and Education Reconciliation Act of 2010 terminated the authority to make new FFEL loans after June 30, 2010. Borrowers who would have been eligible to obtain new FFEL loans could receive loans under the DL program. Regarding Office of Federal Student Aid's monitoring activities, staff were to use financial statement audits to oversee the financial condition of the schools and guaranty agencies that participate in the student loan programs. Compliance audits of schools, lenders, guaranty agencies, and their third-party servicers help Education ensure that these participants comply with applicable statutes, regulations, and program requirements. The Office of Federal Student Aid was required to track findings in these audit reports. 
GAO found that third-party servicers for lenders in the FFEL program did not submit their audited financial statements to Education as required. Education lacked a policy and specific procedures to ensure receipt and review of these audited financial statements. Without such reviews, the Office of Federal Student Aid might not be informed of a third-party servicer's unfavorable audit opinion or significant reported findings that could affect program operations. GAO recommends that the Education Inspector General update the OIG Lender Audit Guide to include all appropriate regulatory requirements for audits of ongoing FFEL participants. GAO also recommends that the Secretary of Education develop and implement policies and procedures requiring Office of Federal Student Aid review of audited financial statements for lender servicers. The Education Office of Inspector General and Education agreed with GAO's recommendations.
Modern agricultural biotechnology refers to various scientific techniques, most notably genetic engineering, used to modify plants, animals, or microorganisms by introducing in their genetic makeup genes for specific desired traits, including genes from unrelated species (see slide 1). For centuries people have crossbred related plant or animal species to develop useful new varieties or hybrids with desirable traits, such as better taste or increased productivity. Traditional crossbreeding, however, can be very time-consuming because it may require breeding several generations to obtain a desired trait and breed out numerous unwanted characteristics. Genetic engineering techniques allow faster development of new crop or livestock varieties, since the genes for a given trait can be readily introduced into a plant or animal species to produce a new variety incorporating that specific trait. Additionally, genetic engineering increases the range of traits available for developing new varieties by allowing genes from totally unrelated species to be incorporated into a particular plant or animal variety. To date, the principal biotechnology products marketed have been certain genetically engineered field crops (see slide 2). No genetically engineered animals have yet been approved, and only a modest number of plant products obtained from biotechnology have been marketed. However, for three key crops grown in the United States—corn, soybeans, and cotton—a large number of farmers have chosen to plant varieties derived from biotechnology. In 2000, biotech varieties accounted for about 25 percent of the corn, 54 percent of the soybeans, and 61 percent of the cotton planted in the United States. These crops are the source of various ingredients used extensively in many processed foods, such as corn syrup and soybean oil, and they are also major U.S. commodity exports. The United States accounts for about three-quarters of biotech crops planted globally. Other major producers of biotech crops are Argentina, which produces primarily biotech soybeans, and Canada, whose principal biotech crop is canola. Several U.S. government agencies are involved in trying to address foreign regulatory measures that affect biotech exports (see slide 3). Some of these government entities, including several agencies within the Department of Agriculture (USDA), the Food and Drug Administration (FDA), and the Environmental Protection Agency (EPA), play a role because of their regulatory expertise in plant and animal health, food safety, or environmental protection. Other agencies, such as the Office of the U.S. Trade Representative (USTR), USDA’s Foreign Agricultural Service, and the Department of State, are involved because of their responsibilities for trade, export facilitation, or diplomatic negotiations. Recent developments in countries that are major markets for U.S. agricultural exports and in various multilateral organizations raise concerns about the prospects for U.S. agricultural biotech exports. For example, no agricultural biotech products have been approved in the European Union (EU) since 1998. In addition, several countries have already passed or are considering regulations mandating labels for foods obtained from biotechnology. Furthermore, in the EU there is an effort to establish regulations requiring documentation to trace the presence of biotech products through each step of the grain handling and food production processes. 
International organizations, such as Codex, are also developing guidelines or rules affecting agricultural biotech trade (see slide 4). Some countries have not approved for marketing certain biotech products that have been approved in the United States (see slide 5). Given the novelty of agricultural biotech products, harmonized regulatory oversight by major trading countries is still a work in progress. Indeed, many countries have no approval process for these products at all. Codex is currently developing international guidelines for analyzing the risks of foods derived from biotechnology that countries may use in establishing their own product approval regulations. The United States and the EU already have in place very different regulatory frameworks for approving new agricultural biotech products or genetically modified organisms. The United States applies existing food safety and environmental protection laws and regulations to biotech products, and makes decisions on approvals based on the characteristics of products rather than whether they are derived from biotechnology. In order to evaluate new products, U.S. regulators require sufficient evidence to determine their safety or risk. Some of this evidence is developed through testing. Under this approach, the United States has approved most new biotech varieties to date. The EU, on the other hand, has established a distinct regime for regulating biotech products and since 1998 has not approved for marketing any new genetically modified organisms. Based on a concept the EU calls the “precautionary principle,” the European Commission maintains that approval of new biotechnology products should not proceed if there is “insufficient, inconclusive or uncertain” scientific data regarding potential risks. U.S. regulators stress that they also consider scientific evidence and exercise precaution in evaluating new products derived from biotechnology. U.S. officials note, however, that the EU’s “precautionary principle” may allow product approval decisions to be influenced by political considerations. Failure of the EU to approve new products is affecting the viability of biotech trade in other parts of the world. For example, given the importance of the EU market, U.S. soybean producers have been reluctant to introduce new biotech varieties that have not been approved for marketing in the EU. Similarly, corn growers in Argentina, who export to the EU, are deferring planting a biotech variety known as “Round-up Ready” corn because the EU has not approved it. In advance of international guidelines, the EU, Japan, and Korea have already passed regulations requiring labels for food and food ingredients derived from biotechnology (see slide 6). These three countries are all significant markets for U.S. agricultural exports. Several other countries, including Australia, New Zealand, and Mexico, are also taking action to adopt such labeling requirements. U.S. officials have raised concerns that such regulations, depending on how they are crafted, could significantly increase production costs and disrupt trade. U.S. producers argue that a label identifying foods as derived from biotechnology is likely to be construed by consumers as a warning label, inhibiting demand for these products. Ultimately, if food producers seeking to avoid such labels reject biotech-derived ingredients, grain handlers may be compelled to separate conventional products from biotech varieties, which would raise handling and documentation costs considerably. 
Labeling requirements also raise questions about threshold levels for biotech ingredients in food. It would not be possible for many foods to avoid labeling requirements that set a zero tolerance for the presence of biotech ingredients, according to U.S. officials. This is primarily because of the comingling of conventional and biotech varieties in the U.S. grain handling system. In the case of Japan, at least, USDA believes that U.S. products will be able to comply with its new labeling rules because foods containing less than a 5-percent threshold of biotech ingredients do not require labeling. More highly processed products, such as seed oils, are exempt from Japan's labeling requirement because they have no detectable trace of genetic modification. The Codex Food Labeling Committee is currently in the process of developing international guidelines for countries that choose to establish mandatory labeling of food and food ingredients obtained through biotechnology. The U.S. delegation has supported a Codex guideline for mandatory labeling only when biotech-derived foods differ significantly from corresponding conventional foods in composition, nutritional value, or intended use. Draft language under consideration in the committee also includes an option for mandatory labeling based on the method of production, even if there is no detectable presence of DNA or protein in the end product resulting from the genetic modification. The U.S. delegation, led by FDA, has opposed this language. The committee remains deadlocked on this issue and has been for several years. “Traceability” is a concept that forms the basis for a proposed EU regulation of agricultural biotech products that could affect U.S. exports (see slide 7). This regulation would require documentation tracing biotech products through each step of the grain handling and food production processes. Currently, no countries have enacted traceability requirements. The European Commission is expected to adopt new regulations on both traceability and labeling requirements for foods and animal feed that contain biotech ingredients or are derived from biotechnology later in 2001. Under these proposed rules, margarine made from soybean oil, for example, would require documentation to identify whether it contains or was derived from a conventional or biotech soybean variety. If the oil was obtained from a biotech soybean variety, the margarine would have to be labeled, even though the oil may not contain detectable traces of modified DNA or protein. After the Commission adopts the regulations, it will forward them to EU legislative bodies for final approval, a process that may take up to a year or more. The EU has also pushed for traceability rules to be included in Codex guidelines and in the Biosafety Protocol's pending rules for documentation of bulk commodity grain shipments. The U.S. government has opposed the inclusion of traceability requirements for biotechnology products in these multilateral discussions. U.S. government officials maintain that traceability requirements could significantly disrupt trade while having no compelling public health benefit. Moreover, U.S. industry groups are concerned about the burden these new regulations would place on the U.S. grain handling and food production systems because of the associated documentation requirements and the need to segregate biotech from conventional crop varieties. Corn and soybeans are the principal U.S. 
commodity exports most threatened by foreign regulations governing biotech products (see slide 8). While exports of both crops are mainly destined for animal feed, these crops face notable differences in overseas markets. Corn exports have already experienced significant losses. From average annual sales of about $300 million in the mid-1990s, U.S. corn exports to the EU have dropped to less than $10 million in recent years. This decline is primarily because new biotech corn varieties have been introduced into production in the United States that have not been approved in the EU. Since it is possible that traces of biotech varieties not approved for marketing in the EU could be present in any shipment of U.S. corn, exporters have opted to discontinue most corn exports to Europe. While the EU has never accounted for more than 5 percent of the world market for U.S. corn, Asian and Latin American countries purchase more than three-quarters of U.S. corn exports. Recently some of the largest markets in these regions—Japan, Korea, and Mexico—have taken action to enact regulatory measures that would require labeling of biotech foods and food ingredients. U.S. industry representatives note that labeling requirements in these countries may adversely impact the marketability of products with a biotech component and present additional difficulties for U.S. corn exports. Unlike corn, U.S. soybean exports have not yet experienced disruptions. As noted above, U.S. soybean exports to the EU are primarily intended for animal feed. The European market is much more important for U.S. soybean exports than it is for corn. U.S. soybean producers have been more restrained about introducing biotech varieties that have not been approved in the EU. Currently, only one biotech variety of soybeans is in general production in the United States, and it has been approved in the EU and most other major markets. However, U.S. officials note that regulations on labeling and traceability now being considered in Europe may pose a threat to future soybean exports even if no new biotech varieties are introduced. This is because for the first time these regulations are expected to apply to animal feed as well as to food meant for human consumption. The United States faces a number of challenges to maintaining access to markets for biotech crops and foods containing or derived from agricultural biotechnology products (see slide 9). Among these challenges are the EU's moves to establish labeling and traceability requirements and gain recognition of the “precautionary principle” in various international organizations. U.S. and industry representatives are concerned that some developing countries may use the EU regulatory framework as the basis for their own regulations on agricultural biotechnology products. They also fear that some foreign governments' lack of experience regulating this new technology may lead them to impose rules that would restrict trade in a manner inconsistent with their WTO obligations. The United States is relatively isolated on biotech trade issues since currently only a few other countries produce or export these commodities. According to U.S. officials, other countries tend to view biotech as primarily a bilateral trade problem between the United States and the EU. Furthermore, since the United States is not a party to the U.N. Convention on Biological Diversity, U.S. participation will be limited in future Biosafety Protocol discussions, including those regarding bulk commodity shipments. 
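To make the threshold-based labeling rules discussed earlier more concrete (for example, Japan's rule that foods below a 5 percent biotech-ingredient threshold do not require a label, and its exemption for highly processed products with no detectable trace of genetic modification), the following Python sketch encodes a generic threshold test. The function, its parameters, and the simplified treatment of processed products are illustrative assumptions, not the text of any country's regulation.

    # Illustrative sketch of a generic biotech-labeling threshold rule;
    # function and parameter names are assumptions for this example only.
    def requires_biotech_label(biotech_share, detectable_modification, threshold=0.05):
        """Return True if a food would need a biotech label under a threshold rule.

        biotech_share: estimated fraction of biotech-derived ingredients (0.0 to 1.0)
        detectable_modification: whether modified DNA or protein is detectable
        threshold: tolerance below which no label is required (0.0 = zero tolerance)
        """
        if not detectable_modification:
            # Highly processed products such as seed oils are exempt under Japan's rules.
            return False
        return biotech_share >= threshold

    # With a zero tolerance, even trace commingling triggers labeling:
    print(requires_biotech_label(0.01, True, threshold=0.0))   # True
    # Under a 5 percent threshold, the same trace level would not:
    print(requires_biotech_label(0.01, True, threshold=0.05))  # False

A zero-tolerance rule corresponds to a threshold of zero, which, given the commingling of conventional and biotech varieties in the U.S. grain handling system, many bulk-derived foods could not satisfy.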
Growing consumer concerns, particularly in Europe, about the safety of biotechnology underlie actions taken by foreign governments that may restrict biotech trade. EU and U.S. officials note that recent food safety scares involving “mad cow” disease and dioxin and the ineffective response to these incidents by certain EU member governments have undermined European consumers' confidence in their food safety regulatory system. Consequently, according to these officials, consumers in Europe question the capacity of regulatory authorities to ensure food safety, and even though these scares were not associated with biotechnology, European attitudes toward biotech foods have been adversely impacted. Some consumer groups contend that there are uncertainties about the risks and benefits of biotech foods, and they are not satisfied with existing U.S. health and environmental safety regulations. Moreover, the first generation of biotech products has primarily provided benefits for producers (such as lower pest management costs and enhanced yields)—not consumers. Recognizing this, the agricultural biotech industry is now promoting the potential benefits to consumers of the next generation of products, particularly improved nutritional content. However, such products have yet to be marketed and may not be for a number of years. Thus, the potential benefits to consumers are not yet well defined. The difficulty grain handlers encounter in trying to completely separate biotech from conventional varieties poses an additional challenge. This problem was highlighted by last year's discovery in U.S. supermarkets of foods containing a biotech corn variety known as StarLink. StarLink had been approved in the United States only for animal feed but found its way into processed foods, as well as into grain shipments to Korea and Japan where the product was not approved. According to industry representatives, the competitive advantage of the U.S. grain handling system results from the comingling of bulk commodity crops, including conventional and biotech varieties. Any regulatory measure that would ultimately lead to segregation or traceability would raise handling costs and potentially undermine the efficiency and competitiveness of this system, they maintain. While growers generally support biotechnology, some actors in the agricultural sector, notably exporters, have been critical of biotech companies for marketing varieties in the United States that have not yet been approved in major market countries. Another challenge is the ability of U.S. government agencies to address other countries' new biotech regulations as they arise and protect U.S. interests in multilateral organizations in matters affecting biotech trade. Given the numerous international discussions in Codex committees and elsewhere, the U.S. government must contend with an increasing demand for staff resources devoted to biotech trade issues. U.S. officials have also highlighted the need for greater outreach to countries participating in these talks or considering their own biotech regulations. Such outreach efforts place an additional burden on agency resources. Finally, the number of U.S. trade and regulatory agencies with biotech-related roles, both domestically and internationally, creates a challenge for effective coordination. For example, there are several different U.S. government agencies representing U.S. 
interests in international organizations on biotech issues and working with other countries bilaterally, including USTR, USDA, FDA, and State. Their efforts require extensive interagency coordination in order to develop and carry out consistent U.S. positions on these issues. We obtained oral comments on a draft of this report from the Office of the U.S. Trade Representative, including the Director for Sanitary and Phytosanitary Affairs. We also obtained oral comments from the Department of Agriculture's Foreign Agricultural Service. The agencies provided technical comments that we incorporated as appropriate. To meet our objectives of (1) summarizing developments in key international organizations and among major U.S. trading partners that are likely to affect agricultural biotech trade; (2) identifying principal U.S. commodities most affected by foreign regulations on biotechnology exports; and (3) describing challenges U.S. biotech exporters face in maintaining access to foreign markets, we studied official documents from various U.S. federal agencies and foreign governments. We did not, however, independently review all foreign government rules or regulations affecting biotech imports. We examined statements by industry groups and nongovernmental organizations, as well as academic studies that addressed agricultural biotechnology trade issues. We interviewed U.S. officials from relevant agencies, including USTR, USDA, FDA, EPA, and the Departments of State and Commerce. We also met with USTR, USDA, and State Department officials in Brussels and Geneva. We met with a cross-section of industry groups, including representatives of growers, processors, exporters, food manufacturers, and biotech companies. In addition, we attended three conferences on agricultural biotechnology issues, and met with agency officials assigned to U.S. delegations to Codex. Our focus was on challenges encountered by U.S. agricultural biotech exports. Pharmaceutical products derived from biotechnology were not part of our review. Moreover, we did not address the appropriateness of U.S. or foreign regulatory measures regarding biotech products. We conducted our work from October 2000 through May 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Ann Veneman, Secretary of Agriculture; the Honorable Robert B. Zoellick, U.S. Trade Representative; the Honorable Colin L. Powell, Secretary of State; the Honorable Tommy Thompson, Secretary of Health and Human Services; and the Honorable Christine Todd Whitman, Administrator, Environmental Protection Agency. Copies will be made available to other interested parties upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-4347. Additional GAO contacts and staff acknowledgments are listed in appendix V.
Slides 1 through 9, which the report text references above, summarize these points. Slide 1 defines agricultural biotechnology as a collection of scientific techniques, such as genetic engineering, used to modify plants, animals, or microorganisms by introducing desired traits, including characteristics from unrelated species, for example to facilitate pest management or improve yield or nutritional value. Slide 2 notes that the principal biotech products marketed to date have been certain genetically engineered field crops and shows global biotech plantings based on USDA's June 2000 Acreage Report: United States, 72 percent (soybeans, corn, cotton, and others); Argentina, 17 percent (soybeans); Canada, 10 percent (canola); and China, about 1 percent (cotton). Slide 3 identifies the U.S. agencies involved, including USDA's Animal and Plant Health Inspection Service and Foreign Agricultural Service. Slide 4 covers the international bodies: Codex, which sets international food safety standards recognized under the WTO Sanitary and Phytosanitary (SPS) agreement, with USDA managing overall U.S. participation and USDA and FDA leading U.S. delegations to Codex committees; the Biosafety Protocol, an environmental agreement under the U.N. Convention on Biological Diversity covering the transshipment and use of living modified organisms, which takes effect upon ratification by 50 countries and for which the State Department represented U.S. interests in negotiations, although the United States has not ratified the Convention or signed the Protocol; and the WTO, which provides the institutional framework for multilateral trade, with disciplines under the SPS and Technical Barriers to Trade agreements and the General Agreement on Tariffs and Trade related to biotech trade and USTR representing U.S. interests. Slide 5 addresses product approvals, noting the U.S. positions that approval regulations must be clear, transparent, timely, science-based, and predictable and that U.S. regulators have concluded that approved biotech foods now on the market are as safe as their conventional counterparts. Slide 6 addresses labeling, noting that strict labeling requirements could reduce consumer demand and increase costs, that the U.S. view is that mandatory labeling should apply only when a new biotech product represents a significant change from the conventional variety or poses a threat to consumer safety, that FDA has recently proposed voluntary labeling guidelines, and that various countries have taken action to enact mandatory labeling requirements. Slide 7 addresses the EU's push for traceability requirements, the potentially prohibitive implementation costs, and U.S. opposition to traceability requirements in Codex. Slides 8 and 9 note that corn and soybean exports are the U.S. commodities most threatened by foreign regulations on biotech products because the U.S. grain handling system comingles biotech and conventional products, so restrictions on biotech varieties affect nearly all exports of these commodities.
In addition to the persons named above, Howard Cott, Jody Woods, Richard Seldin, and Janey Cohen made key contributions to this report.
This report reviews the challenges facing U.S. agricultural biotechnology products in international trade. GAO found that new regulations and guidelines that may restrict U.S. exports of crops with a large biotech component are being enacted or considered by some U.S. trading partners and are also under discussion in various international organizations. These actions address approval, labeling, and traceability of agricultural biotech products. U.S. corn and soybean exports are most threatened by new foreign regulatory measures because of their biotech content. Although U.S. soybean exports have not yet experienced disruptions, U.S. corn exports have been largely shut out of the European Union (EU) market because U.S. farmers are producing some biotech varieties that have not been approved for marketing in the EU. U.S. agricultural biotech exports face several significant challenges in international markets. First, as the single major producer of biotech products, the United States has been relatively isolated in its efforts to maintain access to markets for these products. Second, in many parts of the world, consumer concerns are growing about the safety of biotech foods, which have led key market countries to implement or consider regulations that may restrict U.S. biotech exports. Another challenge is that U.S. industry combines conventional and biotech grain in the distribution chain. Consequently, foreign regulations governing biotech varieties could affect all U.S. exports of these commodities. Finally, as international negotiations in Codex Alimentarius and elsewhere take on greater importance, the U.S. government faces increasing demands for staff resources and coordination among the multiple agencies involved in biotech trade issues.
FDA and FSIS must approve the release of the products they regulate before importers can distribute them in the domestic market. These agencies inspect products to ensure that they comply with U.S. food safety requirements. FDA electronically screened all 2.7 million entries of imported foods under its jurisdiction in fiscal year 1997 and physically inspected about 1.7 percent, or 46,000, of them. FSIS visually inspected all 118,000 entries of imported meat and poultry under its jurisdiction in calendar year 1997 and conducted physical examinations on about 20 percent of them. Importers must post bonds with Customs to allow them to move the shipment from the port. The bond amount is intended to cover any duties, taxes, and penalties. Importers generally obtain continuous bonds that provide coverage for multiple shipments over a specified time period. The amount of a continuous bond is based primarily on a percentage of duties paid in the previous year. Importers can also purchase bonds for single shipments (single-entry bonds) in an amount 3 times the declared value of the shipment. Once Customs reviews entry documents and verifies the bond, it conditionally releases the shipment to the importer. After the conditional release, FSIS and FDA exercise different controls over the shipment, according to their statutory and regulatory authorities. FSIS generally requires the importers of the products it regulates to deliver them to approved import inspection facilities for storage until the products are released or refused entry. If FSIS refuses entry, it notifies the importer, who must arrange for reexport, destruction, or conversion to animal food within 45 days. The shipment is not released from FSIS’ custody until the importer presents documents to FSIS showing that arrangements have been made. In contrast, under the Federal Food, Drug, and Cosmetic Act, as amended (FFDCA), importers are allowed to retain custody of food imports subject to FDA regulation in their own warehouses throughout the entire import process, from pick-up at the port of entry to release, destruction, or reexport. FDA releases most shipments without inspection. If FDA decides to examine a shipment, it asks the importer to make the shipment available for inspection at a place of the importer’s choosing. If FDA refuses to allow the shipment to enter the United States as a result of this inspection, it notifies Customs and the importer and gives the importer 90 days to reexport or destroy the refused shipment. FDA’s decision to refuse entry may occur immediately after inspection or may occur several days or weeks after a sample is collected, when laboratory results become available. If a shipment is not presented for inspection as requested by FDA or FSIS or is refused entry by FDA or FSIS, Customs is to notify the importer through a redelivery notice to (1) make the shipment available for FDA or FSIS inspection or (2) redeliver the refused shipment for Customs’ supervised reexport or destruction. Customs can penalize an importer that fails to (1) make a shipment available for inspection, (2) destroy or reexport a refused shipment within the time frame set out in the Customs redelivery notice, or (3) dispose of the shipment under Customs’ supervision. Customs initially assesses penalties at the maximum amount allowed—3 times the value of the shipment declared on the Customs entry form, up to the amount of available bond coverage.
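To illustrate how the bond and penalty amounts described above interact, the following is a minimal sketch; it is not Customs' actual system, and the shipment and bond figures are hypothetical.

```python
# Minimal sketch of the bond and penalty arithmetic described above.
# Illustrative only; the dollar figures are hypothetical.

def single_entry_bond(declared_value):
    """Single-entry bonds are set at 3 times the shipment's declared value."""
    return 3 * declared_value

def initial_penalty(declared_value, bond_coverage):
    """Customs initially assesses the maximum penalty -- 3 times the declared
    value -- but the assessment is capped at the available bond coverage."""
    return min(3 * declared_value, bond_coverage)

# Hypothetical shipment: $40,000 declared value, covered by a $50,000 continuous
# bond (continuous bonds are based on prior-year duties, not the current shipment).
declared = 40_000
continuous_bond = 50_000
print(single_entry_bond(declared))                 # 120000
print(initial_penalty(declared, continuous_bond))  # 50000 -- capped by the bond
```

As discussed later in this statement, this cap explains why some assessed penalties fall well below 3 times the declared value of the refused shipment.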
According to Customs’ guidelines, Customs must follow FDA’s penalty recommendation when an importer fails to redeliver a refused shipment for export or destruction. Customs may reduce the penalty when the shipment is returned (1) late but disposed of under Customs’ supervision or (2) on time but not disposed of under Customs’ supervision. According to Customs officials, they cannot impose penalties if Customs does not issue a redelivery notice to the importer within 120 days of the FDA refusal date. Weak and inconsistently applied controls have allowed some FDA-regulated imported foods that violate U.S. food safety requirements to enter domestic commerce. This occurs when either (1) importers circumvent required inspections or fail to properly dispose of shipments refused entry or (2) federal agencies do not work together to ensure that these shipments are disposed of properly. Although importers are subject to penalties for circumventing inspection and disposal orders, we found such penalties may not effectively deter violations because the penalties are too low and at times are not imposed at all and therefore fail to serve as a deterrent. Unscrupulous importers bypass FDA inspections of imported food shipments or circumvent requirements for reexporting or destroying food shipments that were refused entry, according to Customs and FDA officials at the ports we visited. This occurs, in large part, because, under FFDCA, importers are allowed to maintain custody of their shipments throughout the import process. Additionally, (1) FDA does not require shipments to have unique identifying marks that would aid in ensuring that other products are not substituted for those targeted for inspection or disposal and (2) importers, under FFDCA, are allowed a long period of time to redeliver refused shipments to Customs for disposal, which facilitates substitution by unscrupulous importers. Recognizing this problem, Customs has conducted and is still conducting operations at a number of ports to detect importers that attempt to circumvent inspection and disposal requirements. For example, in a San Francisco operation that started in October 1996 and was known as “Shark Fin,” Customs and FDA found that importers had diverted trucks en route to inspection stations so that suspect products could be substituted with acceptable products. According to Customs investigators, the operation revealed that six importers were sharing the same acceptable product when they had to present a shipment for inspection—a practice known as “banking.” In a follow-up operation in San Francisco, known as “Operation Bad Apple” and started in July 1997, Customs and FDA found a number of substitution and other problems, such as invoices that falsely identified the product. Customs’ concerns were further validated when this second operation found that 40 of the 131 importers investigated had import shipments with discrepancies, such as product substitution and false product identification. According to a Customs official, 10 of the importers were previously identified as suspicious, while the other 30 importers had been considered reliable until the investigation. Identifying the substitution of products prior to inspection is difficult and labor-intensive, according to FDA and Customs port officials. 
Because FDA-regulated imports do not have unique identification marks that associate a shipment with the import entry documents filed with Customs, extra efforts are required to identify substitution, such as marking or documenting the products at the port before they are released to the importer, then checking the products when they are presented for inspection. FDA and Customs officials believed that placing additional staff at the ports for such efforts, as in the San Francisco operations, could not be sustained as a normal practice, given the resources required and other priorities. Substitution problems have also occurred after inspections, when importers are ordered to redeliver refused shipments to the port for destruction or reexport. Three of the eight ports we reviewed routinely examined FDA-regulated shipments delivered for reexport or destruction to detect substitution, according to Customs and FDA officials. At two of these ports—New York and Blaine—Customs found that substitution had occurred on outbound shipments. For example, in New York, Customs instituted a procedure in 1997 to physically examine selected food shipments that were refused entry and were scheduled for reexport. Officials began this procedure after periodic examinations found that some importers had substituted garbage for the refused shipments that were being reexported. For the 9-month period of October 1, 1997, through June 30, 1998, Customs found discrepancies in 31 of the 105 FDA-refused shipments it examined. Nine of the discrepancies were for product substitution and 22 were for shortages—only part or none of the refused shipment was in the redelivered containers. For example, in one instance, the importer presented hoisin sauce for reexport that had a later production date than the date of the entry into the United States on the original refused shipment. Customs officials believed that the importer distributed the original refused shipment into domestic commerce and substituted the hoisin sauce to avoid detection and penalty. At the other five ports, Customs does not systematically examine the shipments delivered for disposal to detect substitution or only examines them for destruction. For example, at Laredo, Customs officials said they only review the documents provided by the importer and do not examine the shipment to verify that the products being reexported or destroyed are the same products that were refused entry. At Miami, Seattle, and Los Angeles, Customs or FDA officials may examine some products presented for destruction, but, as at the Laredo port, only review the documents provided by the importer to verify the export of refused shipments. At San Francisco, a Customs official told us that he reviews the paperwork on the refused shipment and the paperwork on the shipment presented for destruction or reexport. None of the five ports routinely physically examined the export shipments to ensure they contained the products that were refused entry and listed on the export documents. Customs officials told us they do not have enough time for inspectors to verify each shipment presented for destruction or reexport, given the number of refused shipments and other priorities. A number of factors contribute to FDA’s and Customs’ problems in ensuring that targeted shipments are actually inspected and that refused entries are properly disposed of. 
First, under FFDCA, importers are allowed to maintain custody of their shipments throughout the import process, thus providing importers with the opportunity to circumvent controls. Second, imported food shipments under FDA’s jurisdiction are not required to contain unique identification marks. As a result, it is difficult to verify whether the FDA-regulated shipments presented for inspection were the actual shipments being imported or whether refused shipments were destroyed or reexported. Furthermore, when FDA determines that a shipment is unsafe, FDA does not mark the shipment to show it was refused entry. In contrast, FSIS requires that imported food shipments under its jurisdiction contain unique identifying marks and are retained under its custody until disposal, and when it refuses entry, it stamps each carton “U.S. Refused Entry.” Without such markings, Customs and FDA have less assurance that an importer will not substitute products either before inspection or, in the case of refusal, before redelivery for export or destruction. Furthermore, there is no assurance that an importer will not reimport a refused shipment at a later date. Third, under FFDCA, importers of FDA-regulated products are given 90 days to redeliver refused shipments for proper disposal, which is twice the amount of time that FSIS regulations give importers of FSIS-refused shipments. According to Customs and FDA officials, allowing an importer up to 90 days to dispose of refused products while retaining custody of the shipment provides more time for the importer to arrange for substitution. That is, unscrupulous importers will distribute into domestic commerce shipments refused entry and substitute for reexport a shipment that arrives at a later date. At five of the eight ports we examined, Customs and FDA do not effectively coordinate their efforts to ensure that importers are ordered to redeliver refused shipments for disposal. At two of these ports—Los Angeles and New York—Customs was unaware of FDA’s refusal notices for 61 to 68 percent of the shipments we reviewed. At the other three—Laredo, Pharr, and Seattle—the lack of coordination appears to be less problematic. Nonetheless, as a result of these coordination problems at the five ports, Customs had not issued notices of redelivery to the importers. In contrast, at Miami, San Francisco, and Blaine, Customs and FDA officials coordinate their efforts to issue refusal notices and redelivery notices through joint agency teams or regular reconciliation of records. (See app. I for information we collected on each port’s FDA-refused shipments.) Refused shipments that are not properly disposed of are likely to have entered domestic commerce. For example, according to a New York Customs official, over three-quarters of the cases we reviewed in which Customs did not have an FDA refusal notice—48 out of 63—were presumably released into commerce because Customs did not issue a notice to the importer to redeliver the shipment. In Los Angeles, we found that Customs had not issued a redelivery notice and had no records of disposal for 21 out of 54 shipments we reviewed. Some of these refused shipments that may have been released into commerce posed serious health risks: 11 of the 48 New York cases and 8 of the 21 Los Angeles cases were refused by FDA because they contained salmonella, a bacteria that can cause serious illness. It is unclear why Customs was not aware of all the imported food shipments refused entry by FDA. 
While FDA officials told us they either mailed or hand-delivered notices of refusal to Customs, Customs officials said they did not receive them. Nonetheless, Customs should have been aware of a coordination problem because importers sometimes returned shipments for disposal after receiving a refusal notice from FDA but without having received a Customs redelivery notice. For example, at New York, we found indications that importers returned shipments for destruction or reexport in 15 of the 63 cases in which Customs did not issue a redelivery notice. At Miami, San Francisco, and Blaine, Customs and FDA officials work together to ensure that required redelivery notices are issued on FDA-refused entries. In Miami, a joint Customs-FDA team sends out a single notice to the importer stating that the shipment has been refused entry and that the importer must return it for proper disposal within 90 days. In San Francisco and Blaine, the agencies reconcile their refusal and redelivery notice records each week. As a result of their efforts, we found that Customs was aware of FDA’s refusal notices at these three ports in about 95 percent of the cases we reviewed. Although we found that Customs was frequently not aware of FSIS-refused shipments, we did not find comparable problems of imported food products being distributed domestically after they had been refused entry. According to FSIS officials, when FSIS rejects a shipment, it only notifies the importer of the refusal. The importer, in turn, must notify Customs of the refusal and obtain Customs’ authorization to destroy or export the shipment, but this information often does not reach Customs’ files. In Seattle, for example, of the 15 FSIS cases we reviewed, Customs could not locate files for 7 cases, and only 3 of the remaining 8 case files at Customs contained records of FSIS refusals or Customs notices of redelivery. Despite this apparent lack of coordination, we found records at the FSIS import inspection facility that indicated the refused shipments were disposed of properly. We believe that FSIS’ controls over import shipments—requiring unique markings on each carton, retaining custody of shipments until they are approved for release or properly disposed of, and stamping “U.S. Refused Entry” on rejected shipments—reduced opportunities to bypass import controls. Customs’ penalties for failure to redeliver refused shipments do not effectively deter violations because they are either too low compared with the value of the product or not imposed at all, according to Customs and FDA officials at the ports we reviewed. According to these officials, importers often view these penalties as part of the cost of doing business. Some officials believe importers consider that the penalty from one violation will be covered by the gains made from other shipments that manage to enter commerce. Although penalties for failure to redeliver shipments for which Customs issued a redelivery notice are initially assessed at 3 times the declared value of the shipment, an importer could still profit from the sale of a refused shipment even after buying the product and paying a full penalty for failure to redeliver. For example, we found that the wholesale market price for a 10-pound carton of Guatemalan snow peas ranged from $13 to $15, while the declared value of a 10-pound carton in one refused shipment was $0.75 per carton and the assessed penalty was $2.25 per carton. Thus, in this case, the wholesale value was four to five times the importer’s combined cost of the declared value and the maximum penalty.
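The arithmetic behind the snow peas example can be laid out explicitly; a short worked sketch using the per-carton figures above:

```python
# Worked version of the snow peas example (per 10-pound carton).
declared_value = 0.75                 # declared value per carton
max_penalty = 3 * declared_value      # 2.25 -- maximum penalty of 3x declared value
wholesale_low, wholesale_high = 13.0, 15.0

importer_cost = declared_value + max_penalty   # 3.00 -- cost even after a full penalty
print(wholesale_low / importer_cost)           # ~4.3
print(wholesale_high / importer_cost)          # 5.0
# Selling the refused shipment at wholesale still returns roughly four to five
# times the importer's combined declared value and maximum penalty.
```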
In some cases, Customs did not impose the maximum allowable penalty—3 times the shipment’s declared value—because the penalty exceeded the value of the bond that the importer had posted. At least 16 of the 162 penalty cases identified by Customs in Miami and 7 of the 50 cases we reviewed in New York had lower penalties imposed because of insufficient bond coverage. In Miami, for example, the importer of a shipment of swordfish that was refused entry for excessive levels of mercury but not redelivered as required could have been assessed a penalty in excess of $110,000, but the importer was actually assessed a penalty of only $50,000—the value of the bond. Customs and FDA officials said the bond amount may not cover the maximum penalty because most importers obtain continuous bonds, whose value is set as a percentage of duties paid in the prior year and is not tied to the declared value of the entries in the current year. According to Customs officials in Miami and New York, if the importer has a history of violations, Customs may require the importer to post single-entry bonds for additional entries. At three ports—Los Angeles, San Francisco, and Seattle—Customs did not assess as severe a penalty as agency guidelines suggested because officials at these ports were unable to identify repeat offenders and penalize them accordingly. For example, port officials in Seattle said the computer system that records violation information is difficult to access for identifying repeat offenders, given other priorities. Prior to April 1998, Customs officials for the Laredo and Pharr ports said they could not identify repeat offenders for the same reasons. However, New York, Miami, and Blaine maintained their own records on violations and repeat offenders and usually followed Customs guidelines when assessing penalties on repeat offenders in the cases we reviewed. Finally, Customs officials said they cannot impose penalties in many cases we reviewed because the agency did not issue a redelivery notice to the importer within 120 days of the FDA refusal date. For example, in Los Angeles, we found that 11 cases had refusal notices over 120 days but did not have redelivery notices. Although some importers reexport or destroy their shipments after receiving only the FDA refusal notice, importers that do not redeliver the refused product will not incur a penalty. From their experience, Customs officials believe that in such cases importers distribute the product. Customs and FDA officials and importer association representatives suggested ways to strengthen controls over imported foods as they move through Customs’ and FDA’s import procedures. Some of the more promising suggestions are discussed below. Each of these suggested approaches has advantages and disadvantages, costs, or limitations that would have to be considered before any changes are made. For certain importers that FDA believes are more likely than others to violate import controls because they have a history of violations, Customs and FDA could work together to ensure that substitution does not occur before either inspection or disposal. For example, FDA could target importers, and Customs could order that these importers’ shipments be delivered by bonded truckers to an independent, Customs-approved, bonded warehouse pending inspection. Although FDA can request Customs to require importers to present shipments for inspection at a bonded warehouse, it does not routinely use this authority and make such requests. 
In Los Angeles, for example, FDA officials said they have had Customs make an importer present a shipment to a bonded warehouse only once in the past 2 years. Given their concerns about importers circumventing federal controls over imported foods, Customs and FDA officials at San Francisco and Miami are considering implementing variations on this option. For example, in Miami, Customs and FDA officials are developing a program to require importers of FDA-refused shipments to deliver them into the custody of a centralized examination station, a type of bonded warehouse, for disposal, rather than allowing the importer to retain custody. This approach has the advantage of preventing the targeted importers from bypassing inspection controls and of ensuring the proper disposal of the targeted importers’ shipments that were refused entry. Furthermore, this approach would serve as a deterrent to importers likely to violate requirements because they would have to pay the additional costs associated with unloading a shipment and storing it at a bonded warehouse. Moreover, this approach would not require any change in Customs’ authority. Customs currently uses bonded warehouses for its own inspections and could, at FDA’s request, require targeted importers to use bonded warehouses. This approach also has several limitations. First, it does not cover all importers. While ideally it would be preferable to monitor all importers, it may not be practicable because the costs to law-abiding importers would also increase. Second, even if Customs and FDA focused only on problem importers, the agencies would need to develop a coordinated system to identify them. Similarly, this approach would depend on effective coordination after such identification—FDA would need to request Customs to maintain control of a shipment, and Customs would have to act accordingly. As we have noted, effective coordination between FDA and Customs does not always occur. Customs and FDA could take steps to better ensure that importers with a history of violations do not substitute products before inspection and do return the actual refused cargo for destruction or reexport, by adopting variations on controls used by FSIS for meat and poultry imports. To help prevent substitution before inspection, FDA could require the shipments of importers or products with a history of violations to have unique identification marks on each product container and on entry documents filed with Customs. To help ensure that shipments refused entry are destroyed or reexported, FDA could stamp “refused entry” on each carton/container in shipments that it finds do not meet U.S. food safety requirements. Requiring certain targeted shipments to have unique identification marks would have the advantage of enabling FDA inspectors to better verify that the products presented for inspection were the same products identified on Customs entry documents and of helping Customs inspectors verify that shipments refused entry were disposed of properly. Similarly, stamping refused entries would increase the likelihood that they were actually destroyed or reexported and reduce the likelihood that reexported products would reenter the country at a later time. However, these procedures might be difficult to implement.
Requiring unique identification marks on imports (1) would require FDA to develop and implement a marking and labeling system for the wide variety of imported food products from many different countries that it regulates and (2) might negatively affect trade. Furthermore, a requirement to stamp refused entries would be labor-intensive for FDA because FDA, unlike FSIS, does not always have custody of the shipments at the time of refusal and would have to travel to the storage location to stamp the cartons. Customs and FDA could develop a method of ensuring that importers whose shipments are refused entry into the United States are issued notices to redeliver their cargo. Two approaches were suggested to us. First, Customs could retrieve information from its own database on FDA’s refusals. Customs records all import shipments in its Automated Commercial System (ACS), and FDA communicates its refusal notice to the importer through ACS. Currently, however, Customs’ system is not programmed to identify FDA refusals. Second, in lieu of the first approach, or until this approach is implemented, Customs and FDA could work out a manual system, such as reconciling FDA refusal and Customs redelivery notices. Either of these approaches has the obvious advantage of ensuring that Customs is promptly aware of all FDA refusals so that it can issue redelivery notices. The database approach, however, would require some reprogramming of ACS to enable Customs to access FDA’s refusals as well as training of Customs officials to ensure that they know how to use the software. The second approach would also address the coordination problem but would require more staff time. The Congress could reduce the time allowed for redelivery of FDA-regulated shipments to require importers to dispose of refused shipments more quickly and more in line with the other agencies. By statute, importers of FDA-regulated foods are allowed 90 days to redeliver products after being issued the notice of refusal, in contrast to importers of FSIS-regulated foods, which are allowed a 45-day redelivery period. FDA officials at two ports said the longer time period is intended to give importers enough time to arrange export shipping of refused shipments. In New York, however, Customs officials said some importers use the longer time period to obtain products to substitute for the refused shipments. The advantage of this approach would be to reduce the opportunity for importers to distribute the products into domestic commerce or to prepare substitute products for disposal. However, importers would have less time to consolidate refused entries with other exports, which may increase their shipping costs. Reducing the redelivery period would also require changes in FDA’s statutory authority. Under Customs’ current practices, penalties can be lower than the wholesale market value of a shipment and therefore not effectively prevent refused imported foods from entering domestic commerce. To create a more effective deterrent, Customs could take one or more of the following suggested actions. First, Customs could increase the continuous bond requirement for importers with a history of violations so that the bond would cover potentially higher penalties. Rather than base the calculation for continuous bonds primarily on duties paid in the previous year, Customs could adjust the formula to include the history of violations and damages assessed during the earlier period. 
Second, Customs could require importers with a history of violations to post separate, single-entry bonds for each import shipment. The single-entry bond amount is 3 times the declared value of the shipment. Finally, Customs could impose higher penalties on repeat violators, as allowed by its own guidelines, by providing the means for Customs staff to identify importers with a history of violations. Currently, Customs cannot always identify repeat offenders. These approaches have the advantage of creating a more significant monetary disincentive to importers considering circumventing federal controls. The first two approaches would impose higher costs on repeat violators because they involve added expenses in increasing the level of a continuous bond or purchasing individual bonds for each shipment. The final approach would enable Customs to follow its own guidelines when assessing penalties on repeat violators. The first two approaches, however, would require additional work by Customs staff at each port to review and set bond requirements. The last approach would require Customs to correct deficiencies in its penalty database to allow Customs staff to identify repeat violators. This concludes my prepared testimony. I would be happy to respond to any questions that you and Members of the Subcommittee may have.
GAO discussed: (1) the extent to which federal controls ensure that food importers present shipments for inspection when required and that shipments refused entry are destroyed or reexported; and (2) ways to strengthen these controls. GAO noted that: (1) the Food and Drug Administration's (FDA) controls provide little assurance that shipments targeted for inspection are actually inspected or that shipments found to violate U.S. safety standards are destroyed or reexported; (2) because importers, rather than FDA, retain custody over shipments throughout the import process, some importers have been able to provide substitutes for products targeted for inspection or products that have been refused entry and must be reexported or destroyed, according to Customs Service and FDA officials; (3) moreover, Customs and FDA do not effectively coordinate their efforts to ensure that importers are notified that their refused shipments must be reexported or destroyed; (4) Customs' penalties for violating inspection and disposal requirements may provide little incentive for compliance because they are too low in comparison with the value of the imported products or they are not imposed at all; (5) as a result of these weaknesses, shipments that failed to meet U.S. safety standards were distributed in domestic commerce; (6) because the Food Safety and Inspection Service (FSIS) requires unique identification marks on, and maintains custody of, each shipment of imported foods under its jurisdiction, GAO did not find similar weaknesses in FSIS' controls over the shipments reviewed, although GAO did identify some coordination problems between FSIS and Customs; (7) federal controls would be strengthened by consistently implementing current procedures and by adopting new procedures; (8) Customs and FDA officials and representatives of importer and broker associations identified a number of ways to improve agencies' controls over incoming shipments, strengthen interagency coordination, and provide stronger deterrents against repeat violators; and (9) each of these approaches has advantages and disadvantages that should be considered before making any changes.
Successful implementation of VA’s information technology program requires strong leadership and management to help define and guide the department’s plans and actions. The Paperwork Reduction Act of 1980 and the Clinger-Cohen Act of 1996 articulate the importance of CIOs in promoting improvements in their agencies’ work processes and making sound investment decisions that effectively align IT projects with the organization’s business planning and measurement processes. To be successful in this role, CIOs must build credible organizations and develop and organize information management capabilities to meet agency mission needs. With the hiring of a department-level CIO in August 2001, VA took a significant step toward addressing critical and longstanding weaknesses in its management of information technology. Our prior work has highlighted some of the challenges that the CIO faced as a result of the way in which the department was organized to carry out its information technology mission. Among these challenges was that information systems and services were highly decentralized, with the VA administrations and staff offices controlling a majority of the department’s information technology budget. As illustrated in figure 1, out of the approximately $1.25 billion fiscal year 2002 information technology budget, the Veterans Health Administration (VHA) oversaw approximately $1.02 billion, VBA approximately $158.3 million, and the National Cemetery Administration (NCA) approximately $5.87 million. The remaining $60.2 million was controlled at the department level. In addition, our testimony in March noted that there was neither direct nor indirect reporting to VA’s cyber security officer—the department’s senior security official—thus raising questions about this person’s ability to enforce compliance with security policies and procedures and ensure accountability for actions taken throughout the department. The more than 600 information security officers in VA’s three administrations and its many medical facilities throughout the country were responsible for ensuring the department’s information security, although they reported only to their facility’s director or to the chief information officer of their administration. Given the large annual funding base and decentralized management structure, it is crucial that the CIO ensure that well-established and integrated processes for leading, managing, and controlling investments are commonplace and followed throughout the department. The Secretary has recognized weaknesses in accountability for the department’s information technology resources and the consequent need to reorganize how information technology is managed and financed. Accordingly, in a memorandum dated August 6, 2002, he announced a realignment of the department’s information technology operations. According to the memorandum, the realignment will centralize information technology functions, programs, workforce personnel, and funding into the office of the department-level CIO. In particular, several significant changes are being made: The CIOs in each of the three administrations—VHA, VBA, and NCA— have been designated deputy CIOs and will report directly to the department-level CIO. Previously, these officials served as component- level CIOs who reported only to their respective administrations’ undersecretaries. 
All administration-level cyber security functions have been consolidated under the department’s cyber security office, and all monies earmarked for these functions have been placed under the authority of the cyber security officer. Information security officers previously assigned to VHA’s 21 veterans integrated service networks will now report directly to the cyber security officer, thus extending the responsibilities of the cyber security office to the field. Beginning in fiscal year 2003, the department-level CIO will assume executive authority over VA’s IT appropriations. The realignment had not been finalized at the conclusion of our review; thus, its full impact on VA’s mission and the CIO’s success in managing information technology at the department level could not yet be measured. Nonetheless, in pursuing these reforms, the Secretary has demonstrated the significance of establishing an effective management structure for building credibility in the way information technology is used, and has taken a significant step toward achieving a “One VA” vision. The Secretary’s initiative also represents a bold and innovative step by the department, and is one that has been undertaken by few other federal agencies. For example, as part of our review, we sent surveys to the 23 other major federal agencies, seeking information on the organization and reporting relationships of their department- and component-level CIOs. Of the 17 agencies that responded, 8 reported having component-level CIOs, none of which reported to the department-level CIO. Only one agency with component-level CIOs reported that its department-level CIO had authority over all IT funding. As the realignment proceeds, the CIO’s success in managing information technology operations will hinge on effective collaboration with business counterparts to guide IT solutions that meet mission needs. Guidance that we issued in February 2001 on the effective use of CIOs in several leading private and public organizations provides insight into three key factors contributing to CIO successes: First, senior executives embrace the central role of technology in accomplishing mission objectives and include the CIO as a full participant in senior executive decision-making. Second, effective CIOs have legitimate and influential roles in leading top managers to apply IT to business problems and needs. While placement of the CIO position at an executive management level in the organization is important, effective CIOs earn credibility and produce results by establishing effective working relationships with business unit heads. Third, successful CIOs structure their organizations in ways that reflect a clear understanding of business and mission needs. Along with business processes, market trends, internal legacy structures, and available IT skills, this understanding is necessary to ensure that the CIO’s office is aligned to best serve the needs of the enterprise. VA’s new organizational structure holds promise for building a more solid foundation for investing in and improving the department’s accountability over information technology resources. Specifically, under the realignment the CIO assumes budget authority over all IT appropriations, including authority to veto proposals submitted from sub-department levels. This could have a significant effect on VA’s accountability for how components are spending money, as we have previously noted the department’s inability to adequately capture all of its IT costs.
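The scale of the decentralization that the realignment is intended to address can be seen in the fiscal year 2002 budget figures cited earlier; a small worked check (all figures approximate, in millions of dollars):

```python
# Worked check of the approximate fiscal year 2002 IT budget split cited above
# (in millions of dollars).
vha, vba, nca, department_level = 1_020.0, 158.3, 5.87, 60.2

outside_department = vha + vba + nca            # controlled by the administrations
total = outside_department + department_level   # ~1,244, i.e., about $1.25 billion
print(round(total, 1))
print(round(100 * outside_department / total, 1))  # ~95.2 percent controlled outside the department level
```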
As the first step toward gaining accountability for information technology investments, the CIO is attempting to determine what expenditures have been incurred in fiscal year 2002. Since VA’s annual budget submissions to OMB have not included a specific line item for information technology operations, the CIO has asked each administration to provide accurate information identifying the costs incurred by each of them for this fiscal year. According to the CIO, preliminary results showed that certain non-IT costs, such as for users’ personnel, had been included in the total expenditures, while some IT costs, such as for IT personnel and telecommunications, had been excluded. The CIO’s goal is to compile cost data that accurately reflect the department’s information technology expenditures. In the absence of a budget line item, the CIO is requiring each facility to develop “spend plans” for fiscal year 2003 IT funding. These plans are expected to serve as a control mechanism for information technology expenditures during the year and will be administered by each facility, with the CIO retaining veto power over them. The plans have been designed to provide the CIO with investment cost details at a departmentwide level, allowing for a portfolio-based project selection process and lessening duplication of effort. Once the plans are implemented, the CIO anticipates being able to compare planned and actual expenditures and to uncover the details of specific projects. Developing and implementing an enterprise architecture to guide VA’s information technology activities continues to be an essential and challenging undertaking. VA and other federal agencies are required to develop and implement enterprise architectures to provide a framework for evolving or maintaining existing and planned IT, in accordance with OMB guidelines. In addition, guidance issued last year by the Federal CIO Council, in collaboration with us, further emphasizes the importance of enterprise architectures in evolving information systems, developing new systems, and inserting new technologies that optimize an organization’s mission value. Overall, effective implementation of an enterprise architecture can facilitate VA’s management by serving to inform, guide, and constrain the information technology investment decisions being made for the department, and subsequently decreasing the risk of buying and building systems that are duplicative, incompatible, and unnecessarily costly to maintain and interface. As depicted in figure 2, the enterprise architecture is both dynamic and iterative, changing the enterprise over time by incorporating new business processes, new technology, and new capabilities. Depending on the size of the agency’s operations and the complexity of its environment, enterprise architecture development and implementation require sustained attention to process management and agency action over an extended period of time. Once implemented, the enterprise architecture must be kept current through regular maintenance. Periodic reassessments are required to ensure that it remains aligned with the department’s strategic mission and priorities, changing business practices, funding profiles, and technology innovation. When we testified last March, VA had taken a number of promising steps toward establishing some of the core elements of an enterprise architecture. 
Among other actions, it had obtained executive commitment from the Secretary, department-level CIO, and other senior executives and business teams that is crucial to raising awareness of and leveraging participation in developing the architecture. VA had also chosen a highly recognized framework to organize the structure of its enterprise architecture. Further, it had begun defining its current architecture, an important step for ensuring that future progress can be measured against such a baseline, and it was developing its future (target) telecommunications architecture. Nonetheless, at that time we noted that VA still faced many more critical tasks to successfully develop, implement, and manage its enterprise architecture. One of the key activities that required attention was the establishment of a program management office headed by a permanent chief architect to manage the development and maintenance of the enterprise architecture. In addition, the department needed to complete a program management plan delineating how it would develop, use, and maintain the architecture. Further, although VA had developed a baseline application inventory to describe its “as is” state, it had not completed validating the inventory or developing detailed application profiles for the inventory, including essential information such as business functions, information flows, and external interface descriptions. Over the past 6 months, VA has made substantial strides toward instituting its enterprise architecture program. For example, in April it issued its fiscal year 2002 One VA enterprise architecture implementation plan, which will be used to align integrated technology solutions with the department’s business needs. And in July, the CIO issued a mandatory directive prescribing departmentwide policy for the establishment and implementation of an integrated One VA enterprise architecture and to guide the development and management of all of VA’s IT assets. VA also finalized its enterprise architecture communications plan that will be used to help business and IT management and staff develop a corporate model of customer service. More recently, on September 5, the Secretary approved the initial version of the department’s One VA enterprise architecture. VA officials describe the architecture as a top-down, business-focused document that provides a blueprint for systematically defining and documenting the department’s desired (target) environment. The document provides a high-level, overarching view of the department’s “as is” enterprise business functions and key enabling functions. VA’s work to develop the “as is” view revealed the complexities of its baseline information systems, work processes, and supporting infrastructure. For example, it identified over 30 independently designed and operated data networks, over 200 independent external network connections, over 1,000 remote access system modem connections, and a total of 7,224 office automation servers that are currently part of the baseline environment. The enterprise architecture document also incorporates high-level versions of a sequencing plan, technical reference model, and standards profile—all of which are critical to ensuring the complete development and implementation of the architecture. A sequencing plan serves as a systems migration roadmap to provide the agency with a step-by-step process for moving from the baseline to the target architecture. 
The technical reference model provides a knowledge base for a common conceptual framework, defines a common vocabulary and set of services and interfaces, and serves as a tool for the dissemination of technical information across the department. The standards profile, used in conjunction with the technical reference model, assists departmental components in coordinating the acquisition, development, and interoperability of systems to accomplish the department’s enterprise architecture program goals. Further, VA has integrated security practices into the initial version of its enterprise architecture. These security practices provide a high-level description of the baseline and target distributed systems architectures for major elements of the department’s cyber security infrastructure. Even with notable progress, VA must nonetheless complete a number of additional actions to fully implement and effectively manage its enterprise architecture. With the Federal CIO Council’s guide as a basis for analysis, table 1 illustrates the progress that the department has made since March in accomplishing key enterprise architecture process steps, along with examples of the various critical actions still required to successfully implement and sustain its enterprise architecture program. As the table indicates, immediate attention still needs to be focused on acquiring a permanent chief architect to manage the development and maintenance of the enterprise architecture. Currently, the chief technology officer serves as the acting chief architect while the department recruits someone to fill the position on a permanent basis. According to the acting chief architect, VA anticipates filling the position in early 2003. The enterprise architecture program management office likewise needs to be fully staffed. As of September 6, 5 of the office’s 16 positions had been filled. Officials expect this office to be fully staffed by the end of this year. Instituting a permanent chief architect with the requisite core competencies to lead the enterprise architecture development and fully staffing the enterprise architecture program office to support the effort, will provide vital components of management and oversight necessary for a successful enterprise architecture program. Two quality assurance roles—those of risk manager and configuration manager—also still need to be filled. At the conclusion of our review, VA’s Enterprise Architecture Council was performing risk and configuration management and its Information Technology Board was performing quality assurance functions. However, Federal CIO Council guidance recommends that the CIO make risk and configuration management the explicit responsibilities of individuals designated for those roles. The guide further recommends that the CIO establish an independent quality assurance function to evaluate the enterprise architecture. VA must also still develop a program management plan to delineate how it will develop, use, and maintain the enterprise architecture. Such a plan is integral to providing definitive guidance for effectively managing the enterprise architecture program. Beyond these actions, VA must continue to enhance the enterprise architecture that it has begun instituting. For example, additional work is needed to fully develop the baseline and target architectures to encompass all of the department’s business functions, identify common areas of business, and eliminate duplication of processes across the organization through business process reengineering. 
As the initial version of the enterprise architecture notes, significant process duplication exists across the department. For example, VA identified eight different ways in which registration and eligibility are determined in the “as-is” (baseline) architecture. Nonetheless, although VA recognized opportunities for integrating and consolidating the department’s duplicate processes and functions, its initial enterprise architecture document lacked any specific guidance on how and when consolidation and integration will take place. Also, important to the success of an enterprise architecture effort is a fully-developed enterprise architecture repository. Such a system serves to highlight information interdependencies and improves the understandability of information across an organization. It also helps to significantly streamline change control by establishing linkages among the information, facilitating impact analyses, and providing for ready evaluations of change proposals. Although VA’s enterprise architecture repository contains information reflecting the views of its business planners and owners, the department still needs to completely populate the repository with data that describe the interrelationships among all information elements and work products. The acting chief architect stated that, in fiscal year 2003, the department will assess its need for a different system to serve as the EA repository. As establishment of the enterprise architecture proceeds, VA also will need to further refine its sequencing plan to identify differences between baseline and target architectures and gaps in the process, and to assess the state of legacy, migration, and new systems, and budget priorities and constraints. In addition, the acting chief architect noted that the current version of the technical reference model is generic and will require further development. Such customization is important in order to provide VA with consistent sets of service areas and interface categories and relationships used to address interoperability and open systems issues and serve as a basis for identifying, comparing, and selecting existing and emerging standards and their relationships. Such a document can also be used to organize infrastructure documentation. According to VA officials, actions to refine and build upon the enterprise architecture are ongoing, and the department plans to issue an interim revision to the initial document within 4 to 6 months, and a completely new version by July 2003. The Enterprise Architecture Council will be responsible for developing these products. As the enterprise architecture management program moves forward, the department must ensure that it continues to sufficiently address and complete all critical process steps outlined in the federal CIO guidance within reasonable time frames. With enhanced management capabilities provided by an enterprise architecture framework, VA should be able to (1) better focus on the strategic use of emerging technologies to manage its information, (2) achieve economies of scale by providing mechanisms for sharing services across the department, and (3) expedite the integration of legacy, migration and new systems. VA’s information security continues to be an area of significant concern. The department relies extensively on computer systems and telecommunications networks to meet its mission of providing health care and benefits to veterans. 
VA’s systems support many users, its networks are highly interconnected, and it is moving increasingly to more interactive, Web-based services to better meet the needs of its customers. Effectively securing these systems and networks is critical to the department’s ability to safeguard its assets, maintain the confidentiality of sensitive medical information, and ensure the reliability of its financial data. As this subcommittee is well aware, VA has faced long-standing challenges in achieving effective computer security across the department. Since 1998 we have reported on wide-ranging deficiencies in the department’s computer security controls. Among the weaknesses highlighted was that VA had not established effective controls to prevent individuals from gaining unauthorized access to its systems and sensitive data. In addition, the department had not provided adequate physical security for its computer facilities, assigned duties in a manner that segregated incompatible functions, controlled changes to its operating systems, or updated and tested its disaster recovery plans. Similar weaknesses have been confirmed by VA’s inspector general, as well as through the department’s own assessments of its computer security controls in response to government information reform legislation. As evidence, since September 2001, VA has self-reported approximately 27,000 control weaknesses related to physical and logical access, segregation of duties, system and application controls, and continuity of operations. As of August 31, 2002, according to VA, about half (14,000) of these weaknesses remained unresolved. Contributing significantly to VA’s computer security problems has been its lack of a fully implemented, comprehensive computer security management program—essential to managing risks to business operations that rely on its automated and highly interconnected systems. Our 1998 report on effective security management practices used by several leading public and private organizations and a companion report on risk-based security approaches in 1999 identified key principles that can be used to establish a management framework for more effective information security programs. This framework, depicted in figure 3, points to five key areas of effective computer security program management—central security management, security policies and procedures, risk-based assessments, security awareness, and monitoring and evaluation. Leading organizations we examined applied these key principles to ensure that information security addressed risks on an ongoing basis. Further, these principles have been cited as useful guidelines for agencies by the Federal CIO Council and incorporated into the council’s information security assessment framework, intended for agency self-assessments. When we testified before the subcommittee in March, VA had begun a number of actions to strengthen its overall computer security management posture. For example, the Secretary had instituted information security standards for members of the department’s senior executive service to provide greater management accountability for information security. In addition, VA’s cyber security officer had organized his office to focus more directly on the critical elements of information security control that are defined in our information systems controls audit methodology. 
The cyber security officer also had updated the department’s security management plan, outlining actions for developing risk-based security assessments, improving the monitoring and testing of systems controls, and implementing departmentwide virus-detection software and intrusion-detection systems. The plan placed increased emphasis on centralizing key security functions that were previously decentralized or nonexistent, including virus detection, systems certification and accreditation, network management, configuration management, and incident and audit analysis. Nonetheless, while VA had completed a number of important steps, its security management program continued to lack essential elements required for protecting the department’s computer systems and networks from unnecessary exposure to vulnerabilities and risks. For example, while the department had begun to develop an inventory of known security weaknesses, it had not instituted a comprehensive, centrally managed process that would enable it to identify, track, and analyze all computer security weaknesses. Further, the updated security management plan did not articulate critical actions that VA would need to take to correct specific control weaknesses or time frames for completing key actions.

Since March, the department has taken important steps to further strengthen its computer security management program. For example, the cyber security officer has updated and expanded the department’s information security policies and procedures, placing increased emphasis on better securing and overseeing the department’s computer environment. More recently, as discussed earlier, VA’s realignment of its information technology resources placed administration and field office security functions more directly under the oversight of the department’s CIO. VA has also acted to help provide a more solid foundation for detecting, reporting, and responding to security incidents. For example, it has contracted to acquire an expanded departmentwide incident response and analysis capability, to include enhanced security monitoring and detection. Further, it has enhanced its computer virus detection program by providing technical training to operational staff and distributing antivirus patches for known viruses to affected systems. In addition, VA has initiated a multiyear project intended to consolidate, protect, and centrally manage external connections to its critical financial, medical, and benefits systems. This project, with full implementation planned for September 2004, is expected to reduce the department’s external computer network connections from approximately 200 to about 10. By reducing these connections, VA should be better positioned to reduce the risk of unauthorized access to its critical systems.

As was the case last March, however, VA’s actions have not yet been sufficient to fully implement all of the key elements of a comprehensive computer security management program. In assessing the department’s recent corrective actions relative to our information security risk management framework, VA still needs to accomplish a number of critical tasks that are essential to successfully achieving a comprehensive and effective computer security management program. Table 2 summarizes the steps that VA still needs to accomplish in order to fully implement a comprehensive program.
The department’s critical remaining actions include routinely monitoring and evaluating the effectiveness of security policies and controls and acting to address identified weaknesses. These tasks aid organizations in cost-effectively managing their information security risks rather than reacting to individual problems after a violation has been detected. We have previously recommended that VA establish a program involving ongoing monitoring and evaluation to ensure the effectiveness of its computer control environment. An effective program framework would include a description of the scope and level of testing to be performed, specific control areas to be tested, the frequency of testing, and the identity of responsible VA units. In addition, testing and evaluation would include penetration tests and reviews of the computer network, as well as compliance reviews of all computer control areas, including logical and physical access controls; service continuity tests; and system and application integrity and change controls performed on a scheduled basis. VA has begun placing greater emphasis on controlling its security risks; however, its current framework does not yet include some of the essential elements required to achieve a formal program for monitoring and evaluating computer controls. For example, while the department has conducted some tests of its control environment, including penetration tests and reviews of its computer network, this effort has largely been performed in an ad hoc manner, rather than as part of a formal, ongoing program. Further, while VA has established a departmental process for assessing computer controls, the process relies on VA’s offices to self-report computer control weaknesses, with no independent validation component to ensure the accuracy of reporting. Similarly, an effective computer security management program should include a process for ensuring that remedial action is taken to address significant deficiencies, for analyzing reported weaknesses to identify trends and vulnerabilities, and for applying appropriate countermeasures as needed. Although VA has established a system for tracking corrective actions, it has not developed a process for independently validating or reviewing the appropriateness of the corrective actions taken. Further, the department currently lacks a process to routinely analyze the weaknesses reported, limiting its effectiveness at identifying systemic problems that could adversely affect critical veterans information systems departmentwide. Finally, although VA has developed a framework for addressing departmentwide computer security, it has not yet established a mechanism for collecting and tracking performance data, ensuring management review when appropriate, or providing for independent validation of program deliverables. Until it addresses all key elements of a comprehensive computer security management program and develops a process for managing the department’s security plan, VA will not have full assurance that its financial information and sensitive medical records are adequately protected from unauthorized disclosure, misuse, or destruction.

Mr. Chairman, we continue to be concerned about the slow progress that VBA is making in implementing the VETSNET compensation and pension replacement system. As you know, VBA currently relies on its aging Benefits Delivery Network to deliver over 3.5 million benefits payments to veterans and their dependents each month.
The compensation and pension replacement effort grew out of an initiative that VBA undertook in 1986 to replace its outdated BDN and modernize its compensation and pension, education, and vocational rehabilitation benefits payment systems. After several false starts and approximately $300 million spent on the overall modernization, the administration revised its strategy in 1996 and began focusing on modernizing the compensation and pension (C&P) payment system. VBA has now been working on the C&P replacement initiative for more than 6 years, but continues to be far from full implementation of the new payment system. As we reported last March, long-standing, fundamental deficiencies in VBA’s management of the project hindered successful development and implementation of the system. For example, the initiative was proceeding without a project manager, and VBA had not obtained essential field office support for the new software being developed. In addition, users’ requirements for the new system had not yet been assessed or validated to ensure that VETSNET would meet business needs; and testing of the system’s functional business capability, as well as end-to-end testing to ensure that accurate payments would be delivered, still needed to be completed. Finally, VBA had not developed an integrated project plan to guide its transition from BDN to the new system.

This past June, we recommended that, before approving any new funding for the replacement system, the Secretary should ensure that actions are taken to address our long-standing concerns about VBA’s development and implementation of the system. These recommended actions included (1) appointing a project manager to direct the development of an action plan for, and oversee the complete analysis of, the current system replacement effort; (2) finalizing and approving a revised C&P replacement strategy based on results of the analysis and implementing an integrated project plan; (3) developing an action plan to move VBA from the current to the replacement system; and (4) developing an action plan to ensure that BDN will be available to continue accurately processing benefits payments until the new system is deployed. The department concurred with our recommendations, and stated that actions were either under way or planned to implement them.

Since our March testimony and subsequent recommendations, VBA has acted to further its development and implementation of the C&P replacement system. Among these actions, VBA began recruiting a full-time project manager in June and, according to the deputy CIO for VBA, expects to fill this position by the end of this month. In addition, to obtain field office and program support, in late March VBA formalized an implementation charter that established a VETSNET executive board and a project control board. These entities are expected to provide decision support and oversee progress on the implementation. VBA has also begun revalidating functional business requirements for the new system. Its July 10, 2002 status report called for validating the majority of its requirements by the end of this month and completing all requirements validation by January 2003. The report also identified actions needed to transition VBA from the current to the replacement system. Further, in July VBA hired a contractor to obtain support for testing the VETSNET system applications.
The contractor has been tasked with conducting functional, integration, and linkage testing, as well as software quality assurance for each release of the system applications. Nonetheless, VBA still has significant work to accomplish, and completing its implementation of the new system could take several years. All but one of the software applications comprising the new system still need to be fully deployed or developed, and VBA is currently processing only nine benefits claims using its new software products. As described in VA’s August 2002 Compensation and Pension Replacement System Capital Asset Plan, the C&P replacement strategy incorporates six software applications: (1) Share, (2) Modern Award Processing - Development, (3) Rating Board Automation 2000, (4) Award Processing, (5) Finance and Accounting System, and (6) Correspondence. These applications are being designed to support the processing of initial benefits claims for service-connected disabilities, as shown in table 3.

VBA still has numerous tasks to accomplish before these software applications can be fully implemented. Although the administration implemented its rating board automation tool (RBA 2000) last year, it will not require all of its regional offices to use this software until July 2003. In addition, our recent follow-up work determined that two of the software products continue to be in various stages of deployment. Specifically, among the 57 regional offices that are expected to benefit from the replacement system, only 6 are currently using Share to establish a claim; VBA still needs to implement the tool in the 51 other regional offices. In addition, only two regional offices—Salt Lake and Little Rock—have pilot-tested and are currently using MAP-D to assist in the development of most compensation claims. VBA still needs to implement this tool in 55 other regional offices. Full implementation is currently estimated for October 2003. Further, three software applications—AWARD, FAS, and Correspondence—continue to require development. According to VBA officials, when implemented, AWARD will record award decisions and generate, authorize, and validate on-line awards for veterans and interface with Correspondence to develop the notification letter for the veteran. FAS will provide the accounting functions for benefits payments and will include an interface with the Department of the Treasury. VBA expects to complete software coding for AWARD and FAS by March 2003. Based on its most recent estimates, it expects to begin nationwide deployment of the two systems in April 2004. Once these activities are accomplished, VBA plans to begin its conversion to the new system, with a completion date currently set for December 2004. Figure 4 depicts VBA’s current time line for the full implementation of the system.

Given its current schedule for implementing the C&P replacement system, VBA will have to continue relying on BDN to deliver compensation and pension benefits payments until at least the beginning of 2005. However, with parts of this system nearing 40 years old, BDN’s capability to continue accurately processing benefits payments without additional maintenance is uncertain. Our concerns have been substantiated by the VA claims processing task force, which in its October 2001 report warned that the system’s operations and support were approaching a critical stage and that its performance could potentially degrade and eventually cease.
Since March, VBA has taken steps to help ensure that BDN can be sustained and remains capable of making prompt, uninterrupted payments to veterans. For example, VBA has (1) completed an upgrade of BDN hardware, (2) hired 11 new staff members dedicated to BDN operations, and (3) successfully tested a contingency plan. Further, according to VBA’s deputy CIO, the administration has developed an action plan outlining strategies for keeping BDN operational until the replacement system is implemented. Nonetheless, the risks associated with continued reliance on BDN remain—one of the system’s software applications (database monitor software) is no longer supported by the vendor, nor is it used by any other customer.

Finally, Mr. Chairman, I would like to provide updated information on VA’s progress, in conjunction with the Department of Defense (DOD) and the Indian Health Service (IHS), in achieving the ability to share patient health care data as part of the government computer-based patient record (GCPR) initiative. As you know, the GCPR project was developed in 1998 out of VA and DOD discussions about ways to share data in their health information systems and from efforts to create electronic records for active duty personnel and veterans. IHS became involved because of its experience in population-based research and its long-standing relationship with VA in caring for the Indian veteran population, as well as its desire to improve the exchange of information among its facilities. GCPR was originally envisioned as an electronic interface among the three agencies’ health information systems that would allow physicians and other authorized users at VA, DOD, and IHS health facilities to access data from any of the other agencies’ facilities. The interface was expected to compile requested patient information in a temporary, “virtual” record that could be displayed on a user’s computer screen.

Last March we expressed concerns about the progress that VA, DOD, and IHS had made toward implementing GCPR. We testified that the project continued to operate without clear lines of authority or a lead entity responsible for final decision-making. The project also continued to move forward without comprehensive and coordinated plans, including an agreed-upon mission and clear goals, objectives, and performance measures. These concerns were originally reported in April 2001, when we recommended that the participating agencies (1) designate a lead entity with final decision-making authority and establish a clear line of authority for the GCPR project, and (2) create comprehensive and coordinated plans that included an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies can share comprehensive, meaningful, accurate, and secure patient health care data. VA, DOD, and IHS all agreed with our findings and recommendations. Our March testimony also noted that the scope of the GCPR initiative had been narrowed from its original objectives and that the participating agencies had announced a revised strategy that was considerably less encompassing than the project was originally intended to be.
Specifically, rather than serve as an interface to allow data sharing across the three agencies’ disparate systems, as originally envisioned, a first (near-term) phase of the revised strategy had called only for a one-way transfer of data from DOD’s current health care information system to a separate database that VA hospitals could access. Subsequent phases of the effort that were to further expand GCPR’s capabilities had also been revised. A second phase that would have enabled information exchange among all three agencies had been re-scoped to enable only a bilateral read-only exchange of data between VA and IHS. Plans for a third phase involving the expansion of GCPR’s capabilities to public and private national health information standards groups were no longer being considered for the project, and there were no plans for DOD to receive data from VA.

In May, VA and DOD proceeded with implementing the revised strategy. They finalized a memorandum of agreement that designated VA as the lead entity in implementing the project and formally renamed the project the Federal Health Information Exchange (FHIE) Program. According to program officials, FHIE is now a joint effort between DOD and VA that will enable the exchange of health care information in two phases. The first phase, or near-term solution, is to enable the one-way transfer of data from DOD’s existing health care information system to a separate database that VA hospitals can access. Nationwide deployment and implementation of the first phase began in late May of this year and were completed in mid-July. FHIE was built to interface with VA’s and DOD’s existing systems. Specifically, electronic data from separated service members contained in DOD’s Military Health System Composite Health Care System are transmitted to VA’s FHIE repository, which can then be accessed through the Computerized Patient Record System (CPRS) in VA’s Veterans Health Information Systems and Technology Architecture (VISTA). Clinicians are able to access and display the data through CPRS remote data views. The data currently available for transfer include demographic and certain clinical information, such as laboratory results, outpatient pharmacy data, and radiology reports on service members who have separated from DOD. The final phase of the near-term solution is anticipated to begin this October. According to VA and DOD officials, this phase is intended to broaden the base of health information available to VA clinicians through the transfer of additional health information on separated service members. This additional information is expected to consist of discharge summaries; allergy information; admissions, disposition, and transfer information; and consultation results that include referring physicians and physical findings. Completion of this final phase of FHIE is scheduled for September 2003. VA and DOD have budgeted $12 million in fiscal year 2003 ($6 million for each agency) to cover completion and maintenance of the near-term effort.

FHIE is currently available to all VA medical centers, and according to program officials, is showing positive results. The officials stated that, presently, the FHIE repository contains data on almost 2 million unique patients. This includes clinical data on over 1 million service personnel who separated between 1987 and 2001. The data consist of over 14 million lab messages, almost 14 million pharmacy messages, and over 2 million radiology messages.
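To illustrate the one-way data flow described above, the following minimal sketch models a source system whose records for separated members are loaded into a repository that clinicians can only read. It is an illustrative example only, not VA's or DOD's actual implementation; all class names, fields, and sample records are hypothetical.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class ClinicalMessage:
    """A single transferred record (for example, a lab, pharmacy, or radiology message)."""
    patient_id: str
    category: str   # "lab", "pharmacy", or "radiology"
    content: str


@dataclass
class SourceSystem:
    """Stands in for the sending system that holds data on service members."""
    messages: list[ClinicalMessage] = field(default_factory=list)

    def extract_for_separated_members(self, separated_ids: set[str]) -> list[ClinicalMessage]:
        # Only records for members who have separated are eligible for transfer.
        return [m for m in self.messages if m.patient_id in separated_ids]


class ReadOnlyRepository:
    """Stands in for the receiving repository: it accepts one-way loads and serves read-only queries."""

    def __init__(self) -> None:
        self._store: dict[str, list[ClinicalMessage]] = {}

    def load(self, messages: list[ClinicalMessage]) -> None:
        # One-way transfer: data flow in; nothing is sent back to the source system.
        for m in messages:
            self._store.setdefault(m.patient_id, []).append(m)

    def remote_data_view(self, patient_id: str) -> list[ClinicalMessage]:
        # Clinician query: a read-only view of whatever has been transferred so far.
        return list(self._store.get(patient_id, []))


if __name__ == "__main__":
    source = SourceSystem(messages=[
        ClinicalMessage("A100", "lab", "CBC within normal limits"),
        ClinicalMessage("A100", "pharmacy", "Amoxicillin 500 mg dispensed"),
        ClinicalMessage("B200", "radiology", "Chest X-ray: no acute findings"),
    ])
    repository = ReadOnlyRepository()
    repository.load(source.extract_for_separated_members({"A100"}))  # B200 has not separated

    for message in repository.remote_data_view("A100"):
        print(f"{message.category}: {message.content}")
    print("Records visible for B200:", repository.remote_data_view("B200"))

The key property the sketch tries to capture is directionality: the repository exposes no path back to the source system, mirroring the one-way transfer described above.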
Program officials stated that the quick retrieval and readability of data contained in the FHIE repository have begun providing valuable support to VA clinicians. They stated that FHIE is capable of accommodating up to 800 queries per hour, with an average response time of 14 seconds per query. For the week beginning July 29, 2002, VA clinicians made 287 authorized queries to the database. In addition, when a clinician at a VA medical facility retrieves the data transmitted from DOD, the data appear in the same format as the data captured in CPRS, further facilitating their use. During a demonstration of the data retrieval capability, a clinician at VA’s Washington, D.C., medical center told us that the information provided through FHIE has proven particularly valuable for treating emergency room and first-time patients. He added that additional data anticipated from the second phase of FHIE should prove to be even more valuable.

Beyond FHIE, VA and DOD have envisioned a long-term strategy involving the two-way exchange of clinical information. This initiative has been termed HealthePeople (Federal). According to VHA’s CIO and the Military Health System CIO, VA and DOD are jointly implementing a plan that will result in computerized health record systems that ensure interoperability between DOD’s Composite Health Care System II and VA’s HealtheVet VISTA to achieve the sharing of secure health data required by their health care providers. In order to accomplish this objective, the two agencies intend to standardize health and related data, communications, security, and software applications where appropriate. As part of HealthePeople (Federal), IHS is also expected to be actively involved in helping to develop national standards and compatible software applications to further the standardization of data, communications, and security for health information systems. When our review concluded, VA and DOD had just begun this initiative, with a focus on addressing the standardization issue. At that time, they anticipated implementing this exchange of clinical information by the end of 2005.
In March of this year, GAO testified before the House Subcommittee on Oversight and Investigations, Committee on Veterans' Affairs, about the Department of Veterans Affairs' (VA) information technology (IT) program, and the strides that the Secretary had made in improving departmental leadership and management of this critical area--including the hiring of a chief information officer. At the Subcommittee's request, GAO evaluated VA's new IT organizational structure, and provided an update on VA's progress in addressing other specific areas of IT concern and our related recommendations pertaining to enterprise architecture, information security, the Veterans Benefits Administration's replacement compensation and pension payment system and maintenance of the Benefits Delivery Network, and the government computer-based patient record initiative. Since our March testimony, VA has made important progress in its overall management of information technology. For example, the Secretary's decision to centralize IT functions, programs, and funding under the department-level CIO holds great promise for improving the accountability and management of IT spending--currently over $1 billion per year. But in this as well as the other areas of prior weakness, the strength of VA's leadership and continued management commitment to achieving improvements will ultimately determine the department's degree of success. VA's progress in the other areas includes the following.

Enterprise architecture: The Secretary recently approved the initial, "as is" version of this blueprint for evolving its information systems, focused on defining the department's current environment for selected business functions. VA still, however, needs to select a permanent chief architect and establish a program office to facilitate, manage, and advance this effort.

Information security: Steps have been taken that should help provide a more solid foundation for detecting, reporting, and responding to security incidents. Nonetheless, the department has not yet fully implemented a comprehensive computer security management program that includes a process for routinely monitoring and evaluating the effectiveness of security policies and controls and for acting to address identified vulnerabilities.

Compensation and pension payment system: While some actions have been taken, after more than 6 years, full implementation of this system is not envisioned before 2005; this means that the 3.5 million payments that VA makes each month will continue to depend on its present, aging system.

Government computer-based patient record initiative: VA and the Department of Defense have reported some progress in achieving the capability to share patient health care data under this program. Since March, the agencies have formally renamed the initiative the Federal Health Information Exchange and have begun implementing a more narrowly defined strategy involving a one-way information transfer from Defense to VA; a two-way exchange is planned by 2005.
According to the Institute of Medicine, the federal government has a central role in shaping nearly all aspects of the health care industry as a regulator, purchaser, health care provider, and sponsor of research, education, and training. According to HHS, federal agencies fund more than a third of the nation’s total health care costs. Given the level of the federal government’s participation in providing health care, it has been urged to take a leadership role in driving change to improve the quality and effectiveness of medical care in the United States, including expanded adoption of IT. In April 2004, President Bush called for the widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health Information Technology within HHS as the government official responsible for the development and execution of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private sectors. In July 2004, HHS released The Decade of Health Information Technology: Delivering Consumer-centric and Information-rich Health Care—Framework for Strategic Action. This framework described goals for achieving nationwide interoperability of health IT and actions to be taken by both the public and private sectors in implementing a strategy. HHS’s Office of the National Coordinator for Health IT updated the framework’s goals in June 2006 and included an objective for protecting consumer privacy. It identified two specific strategies for meeting this objective—(1) support the development and implementation of appropriate privacy and security policies, practices, and standards for electronic health information exchange and (2) develop and support policies to protect against discrimination based on personal health information, such as denial of medical insurance or employment.

In July 2004, we testified on the benefits that effective implementation of IT can bring to the health care industry and the need for HHS to provide continued leadership, clear direction, and mechanisms to monitor progress in order to bring about measurable improvements. Since then, we have reported or testified on several occasions on HHS’s efforts to define its national strategy for health IT. We have recommended that HHS develop the detailed plans and milestones needed to ensure that its goals are met; HHS agreed with our recommendation and has taken some steps to define more detailed plans. In our report and testimonies, we have described a number of actions that HHS, through the Office of the National Coordinator for Health IT, has taken toward accelerating the use of IT to transform the health care industry, including the development of its framework for strategic action. We have also described the Office of the National Coordinator’s continuing efforts to work with other federal agencies to revise and refine the goals and strategies identified in its initial framework. The current draft framework—The Office of the National Coordinator: Goals, Objectives, and Strategies—identifies objectives for accomplishing each of four goals, along with 32 high-level strategies for meeting the objectives, including the two strategies for protecting consumer privacy.

Federal health care reform initiatives of the early- to mid-1990s were inspired in part by public concern about the privacy of personal medical information as the use of health IT increased.
Congress, recognizing that benefits and efficiencies could be gained by the use of information technology in health care, also recognized the need for comprehensive federal medical privacy protections and consequently passed the Health Insurance Portability and Accountability Act of 1996 (HIPAA). This law provided for the Secretary of HHS to establish the first broadly applicable federal privacy and security protections designed to protect individual health care information. HIPAA required the Secretary of HHS to promulgate regulatory standards to protect certain personal health information held by covered entities, which are certain health plans, health care providers, and health care clearinghouses. It also required the Secretary of HHS to adopt security standards under which covered entities that maintain or transmit health information must maintain reasonable and appropriate safeguards. The law requires that covered entities take certain measures to ensure the confidentiality and integrity of the information and to protect it against reasonably anticipated unauthorized use or disclosure and threats or hazards to its security. HIPAA provides authority to the Secretary to enforce these standards. The Secretary has delegated administration and enforcement of privacy standards to the department’s Office for Civil Rights and enforcement of the security standards to the department’s Centers for Medicare and Medicaid Services.

Most states have statutes that in varying degrees protect the privacy of personal health information. HIPAA recognizes this and specifically provides that its implementing regulations do not preempt contrary provisions of state law if the state laws impose more stringent requirements, standards, or specifications than the federal privacy rule. In this way, the law and its implementing rules establish a baseline of mandatory minimum privacy protections and define basic principles for protecting personal health information. The Secretary of HHS first issued HIPAA’s Privacy Rule in December 2000, following public notice and comment, but later modified the rule in August 2002. Subsequent to the issuance of the Privacy Rule, the Secretary issued the Security Rule in February 2003 to safeguard electronic protected health information and help ensure that covered entities have proper security controls in place to provide assurance that the information is protected from unwarranted or unintentional disclosure. The Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information. Table 1 summarizes these principles.

HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting health information. Specifically, HHS awarded several health IT contracts that include requirements for developing solutions that comply with federal privacy and security requirements, consulted with the National Committee on Vital and Health Statistics (NCVHS) to develop recommendations regarding privacy and confidentiality in the Nationwide Health Information Network, and formed the American Health Information Community (AHIC) Confidentiality, Privacy, and Security Workgroup to frame privacy and security policy issues and identify viable options or processes to address these issues.
The Office of the National Coordinator for Health IT intends to use the results of these activities to identify technology and policy solutions for protecting personal health information as part of its continuing efforts to complete a national strategy to guide the nationwide implementation of health IT. However, HHS is in the early stages of identifying solutions for protecting personal health information and has not yet defined an overall approach for integrating its various privacy-related initiatives and for addressing key privacy principles.

HHS awarded four major health IT contracts in 2005 intended to advance the nationwide exchange of health information—Privacy and Security Solutions for Interoperable Health Information Exchange, Standards Harmonization Process for Health IT, Nationwide Health Information Network Prototypes, and Compliance Certification Process for Health IT. These contracts include requirements for developing solutions that comply with federal privacy requirements. The contract for privacy and security solutions is intended to specifically address privacy and security policies and practices that affect nationwide health information exchange. It is intended to provide a nationwide synthesis of information to inform both privacy and security policymaking at the federal, state, and local levels and the Nationwide Health Information Network prototype solutions that support health information exchange across the nation. In summer 2006, the privacy and security solutions contractor selected 34 states and territories as locations in which to perform assessments of organization-level privacy- and security-related policies and practices that affect interoperable electronic health information exchange and their bases, including laws and regulations. The contractor is supporting the states and territories as they (1) assess variations in organization-level business policies and state laws that affect health information exchange, (2) identify and propose solutions while preserving the privacy and security requirements of applicable federal and state laws, and (3) develop detailed plans to implement solutions.

The privacy and security solutions contractor is to develop a nationwide report that synthesizes and summarizes the variations identified, the proposed solutions, and the steps that states and territories are taking to implement their solutions. It is also to deliver an interim report to address policies and practices followed in nine domains of interest: (1) user and entity authentication, (2) authorization and access controls, (3) patient and provider identification to match identities, (4) information transmission security or exchange protocols (encryption, etc.), (5) information protections to prevent improper modification of records, (6) information audits that record and monitor the activity of health information systems, (7) administrative or physical security safeguards required to implement a comprehensive security platform for health IT, (8) state law restrictions about information types and classes and the solutions by which electronic personal health information can be viewed and exchanged, and (9) information use and disclosure policies that arise as health care entities share clinical health information electronically. These domains of interest address the use and disclosure principle and the security principle.
In June 2006, NCVHS, a key national health information advisory committee, presented to the Secretary of HHS a report recommending actions regarding privacy and confidentiality in the Nationwide Health Information Network. The recommendations cover topics that are, according to the committee, central to challenges for protecting health information privacy in a national health information exchange environment. The recommendations address aspects of key privacy principles including (1) the role of individuals in making decisions about the use of their personal health information, (2) policies for controlling disclosures across a nationwide health information network, (3) regulatory issues such as jurisdiction and enforcement, (4) use of information by non-health care entities, and (5) establishing and maintaining the public trust that is needed to ensure the success of a nationwide health information network. The recommendations are being evaluated by the AHIC work groups, the Certification Commission for Health IT, the Health Information Technology Standards Panel, and other HHS partners. In October 2006, the committee recommended that HIPAA privacy protections be extended beyond the current definition of covered entities to include other entities that handle personal health information. It also called on HHS to create policies and procedures to accurately match patients with their health records and to require functionality that allows patient or physician privacy preferences to follow records regardless of location. The committee intends to continue to update and refine its recommendations as the architecture and requirements of the network advance.

AHIC, a commission that provides input and recommendations to HHS on nationwide health IT, formed the Confidentiality, Privacy, and Security Workgroup in July 2006 to frame privacy and security policy issues and to solicit broad public input to identify viable options or processes to address these issues. The recommendations to be developed by this work group are intended to establish an initial policy framework and address issues including methods of patient identification, methods of authentication, mechanisms to ensure data integrity, methods for controlling access to personal health information, policies for breaches of personal health information confidentiality, guidelines and processes to determine appropriate secondary uses of data, and a scope of work for a long-term independent advisory body on privacy and security policies. The work group has defined two initial work areas—identity proofing and user authentication—as initial steps necessary to protect confidentiality and security. These two work areas address the security principle. Last month, the work group presented to AHIC its recommendations on performing patient identity proofing. The work group intends to address other key privacy principles, including, but not limited to, maintaining data integrity and controlling access. It plans to address policies for breaches of confidentiality and guidelines and processes for determining appropriate secondary uses of health information, an aspect of the use and disclosure privacy principle.

HHS has taken steps intended to address aspects of key privacy principles through its contracts and with advice and recommendations from its two key health IT advisory committees. For example, the privacy and security solutions contract is intended to address all the key privacy principles in HIPAA.
Additionally, the uses and disclosures principle is to be further addressed through the advisory committees’ recommendations and guidance. The security principle is to be addressed through the definition of functional requirements for a nationwide health information network, the definition of security criteria for certifying electronic health record products, the identification of information exchange standards, and recommendations from the advisory committees regarding, among other things, methods to establish and confirm a person’s identity. The committees have also made recommendations for addressing authorization for uses and disclosure of health information and intend to develop guidelines for determining appropriate secondary uses of data.

HHS has made some progress toward protecting personal health information through its various privacy-related initiatives. For example, during the past 2 years, HHS has defined initial criteria and procedures for certifying electronic health records, resulting in the certification of 35 IT vendor products. In January 2007, HHS contractors presented four initial prototypes of a Nationwide Health Information Network (NHIN). However, the other contracts have not yet produced final results. For example, the privacy and security solutions contractor has not yet reported its assessment of state and organizational policy variations. This report is due on March 31, 2007. Additionally, HHS has not accepted or agreed to implement the recommendations made in June 2006 by the NCVHS, and the AHIC Confidentiality, Privacy, and Security Workgroup is in the very early stages of efforts that are intended to result in privacy policies for nationwide health information exchange.

HHS is in the early phases of identifying solutions for safeguarding personal health information exchanged through a nationwide health information network and has not yet defined an approach for integrating its various efforts or for fully addressing key privacy principles. For example, HHS has not defined milestones for integrating the results of its various privacy-related initiatives and resolving differences and inconsistencies among them. Nor has it determined which entity participating in its privacy-related activities is responsible for integrating these initiatives, or the extent to which their results will address key privacy principles. Until HHS defines an integration approach and milestones for completing these steps, its overall approach for ensuring the privacy and protection of personal health information exchanged throughout a nationwide network will remain unclear.

The increased use of information technology to exchange electronic health information introduces challenges to protecting individuals’ personal health information. In our report, we identify and summarize key challenges described by health information exchange organizations: understanding and resolving legal and policy issues, particularly those resulting from varying state laws and policies; ensuring appropriate disclosures of the minimum amount of health information needed; ensuring individuals’ rights to request access to and amendments of health information to ensure it is correct; and implementing adequate security measures for protecting health information. Table 2 summarizes these challenges.
Understanding and Resolving Legal and Policy Issues

Health information exchange organizations bring together multiple and diverse health care providers, including physicians, pharmacies, hospitals, and clinics that may be subject to varying legal and policy requirements for protecting health information. As health information exchange expands across state lines, organizations are challenged with understanding and resolving data-sharing issues introduced by varying state privacy laws. HHS recognized that sharing health information among entities in states with varying laws introduces challenges and intends to identify variations in state laws that affect privacy and security practices through the privacy and security solutions contract that it awarded in 2005.

Several organizations described issues associated with ensuring appropriate disclosure, such as determining the minimum data necessary to disclose in order for requesters to accomplish the intended purposes for the use of the health information. For example, dieticians and health claims processors do not need access to complete health records, whereas treating physicians generally do. Organizations also described issues with obtaining individuals’ authorization and consent for uses and disclosures of personal health information and difficulties with determining the best way to allow individuals to participate in and consent to electronic health information exchange. In June 2006, NCVHS recommended to the Secretary of HHS that the department monitor the development of different approaches and continue an open, transparent, and public process to evaluate whether a national policy on this issue would be appropriate.

Ensuring Individuals’ Rights to Request Access and Amendments to Health Information to Ensure It Is Correct

As the exchange of personal health information expands to include multiple providers and as individuals’ health records include increasing amounts of information from many sources, keeping track of the origin of specific data and ensuring that incorrect information is corrected and removed from future health information exchange could become increasingly difficult. Additionally, as health information is amended, HIPAA rules require that covered entities make reasonable efforts to notify certain providers and other persons that previously received the individuals’ information. The challenges associated with meeting this requirement are expected to become more prevalent as the number of organizations exchanging health information increases.

Implementing Adequate Security Measures for Protecting Health Information

Adequate implementation of security measures is another challenge that health information exchange providers must overcome to ensure that health information is adequately protected as health information exchange expands. For example, user authentication will become more difficult when multiple organizations that employ different techniques exchange information. The AHIC Confidentiality, Privacy, and Security Workgroup recognized this difficulty and identified user authentication as one of its initial work areas for protecting confidentiality and security.

To increase the likelihood that HHS will meet its strategic goal to protect personal health information, we recommend in our report that the Secretary of Health and Human Services define and implement an overall approach for protecting health information as part of the strategic plan called for by the President. This approach should:
1. Identify milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, including the results of its four health IT contracts and recommendations from the NCVHS and AHIC advisory committees.

2. Ensure that key privacy principles in HIPAA are fully addressed.

3. Address key challenges associated with legal and policy issues, disclosure of personal health information, individuals’ rights to request access and amendments to health information, and security measures for protecting health information within a nationwide exchange of health information.

In commenting on a draft of our report, HHS disagreed with our recommendation and referred to “the department’s comprehensive and integrated approach for ensuring the privacy and security of health information within nationwide health information exchange.” However, an overall approach for integrating the department’s various privacy-related initiatives has not been fully defined and implemented. While progress has been made in initiating these efforts, much work remains before they are completed and the outcomes of the various efforts are integrated. HHS specifically disagreed with the need to identify milestones and stated that tightly scripted milestones would impede HHS’s processes and preclude stakeholder dialogue on the direction of important policy matters. We disagree and believe that milestones are important for setting targets for implementation and for informing stakeholders of HHS’s plans and goals for protecting personal health information as part of its efforts to achieve nationwide implementation of health IT. HHS did not comment on the need to identify an entity responsible for the integration of the department’s privacy-related initiatives, nor did it provide information regarding an effort to assign responsibility for this important activity. HHS neither agreed nor disagreed that its approach should address privacy principles and challenges, but stated that the department plans to continue to work toward addressing privacy principles in HIPAA and that our report appropriately highlights efforts to address challenges encountered during electronic health information exchange. HHS stated that the department is committed to ensuring that health information is protected as part of its efforts to achieve nationwide health information exchange.

In written comments, the Secretary of Veterans Affairs concurred with our findings, conclusions, and recommendation to the Secretary of HHS and commended our efforts to highlight methods for ensuring the privacy of electronic health information. The Department of Defense chose not to comment on a draft of the report.

In summary, concerns about the protection of personal health information exchanged electronically within a nationwide health information network have increased as the use of health IT and the exchange of electronic health information have also increased. HHS and its Office of the National Coordinator for Health IT have initiated activities that, collectively, are intended to protect health information and address aspects of key privacy principles.
While progress continues to be made through the various initiatives, it becomes increasingly important that HHS define a comprehensive approach and milestones for integrating its efforts, resolve differences and inconsistencies among them, fully address key privacy principles, ensure that recommendations from its advisory committees are effectively implemented, and sequence the implementation of key activities appropriately. HHS’s current initiatives are intended to address many of the challenges that organizations face as the exchange of electronic health information expands. However, without a clearly defined approach that establishes milestones for integrating efforts and fully addresses key privacy principles and the related challenges, it is likely that HHS’s goal to safeguard personal health information as part of its national strategy for health IT will not be met. Mr. Chairman, Senator Voinovich, and members of the subcommittee, this concludes our statement. We will be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact Linda Koontz at (202) 512-6240 or David Powner at (202) 512-9286, or by e-mail at koontzl@gao.gov or pownerd@gao.gov. Other key contributors to this testimony include Mirko J. Dolak, Amanda C. Gill, Nancy E. Glover, M. Saad Khan, David F. Plocher, Charles F. Roney, Sylvia L. Shanks, Sushmita L. Srikanth, Teresa F. Tucker, and Morgan F. Walts.
In April 2004, President Bush called for the Department of Health and Human Services (HHS) to develop and implement a strategic plan to guide the nationwide implementation of health IT. The plan is to recommend methods to ensure the privacy of electronic health information. GAO was asked to summarize its report that is being released today. The report describes the steps HHS is taking to ensure privacy protection as part of its national health IT strategy and identifies challenges associated with protecting electronic health information exchanged within a nationwide health information network. HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting personal health information through several contracts and with two health information advisory committees. For example, in late 2005, HHS awarded several health IT contracts that include requirements for addressing the privacy of personal health information exchanged within a nationwide health information exchange network. Its privacy and security solutions contractor is to assess the organization-level privacy- and security-related policies, practices, laws, and regulations that affect interoperable health information exchange. Additionally, in June 2006, the National Committee on Vital and Health Statistics made recommendations to the Secretary of HHS on protecting the privacy of personal health information within a nationwide health information network and in August 2006, the American Health Information Community convened a work group to address privacy and security policy issues for nationwide health information exchange. While these activities are intended to address aspects of key principles for protecting the privacy of health information, HHS is in the early stages of its efforts and has therefore not yet defined an overall approach for integrating its various privacy-related initiatives and addressing key privacy principles, nor has it defined milestones for integrating the results of these activities. GAO identified key challenges associated with protecting electronic personal health information in four areas.
Since late 2003, the United States has employed numerous strategies to address the security and reconstruction needs of Iraq. First, the multinational force’s security transition strategy called for Iraqi security forces to assume security responsibilities on an accelerated basis during spring 2004. This strategy failed when Iraqi security forces performed poorly during an insurgent uprising. Second, a series of campaign plans and a strategy document attempted to integrate U.S. military and civilian efforts in Iraq but did not anticipate the escalation in violence during 2006. Third, to address the high levels of violence, the administration announced a new strategy, The New Way Forward. In October 2003, the multinational force outlined a four-phased plan for transferring security missions to Iraqi security forces. The four phases were (1) mutual support, where the multinational force established conditions for transferring security responsibilities to Iraqi forces; (2) transition to local control, where Iraqi forces in a local area assumed responsibility for security; (3) transition to regional control, where Iraqi forces were responsible for larger regions; and (4) transition to strategic overwatch, where Iraqi forces on a national level were capable of maintaining a secure environment against internal and external threats, with broad monitoring from the multinational force. The plan’s objective was to allow a gradual drawdown of coalition forces first in conjunction with the neutralization of Iraq’s insurgency and second with the development of Iraqi forces capable of securing their country. Citing the growing capability of Iraqi security forces, MNF-I attempted to shift responsibilities to them in February 2004 but did not succeed in this effort. In March 2004, Iraqi security forces numbered about 203,000, including about 76,000 police, 78,000 facilities protection officers, and about 38,000 in the civilian defense corps. Police and military units performed poorly during an escalation of insurgent attacks against the coalition in April 2004. According to a July 2004 executive branch report to Congress, many Iraqi security forces around the country collapsed during this uprising. Some Iraqi forces fought alongside coalition forces. Other units abandoned their posts and responsibilities and, in some cases, assisted the insurgency. A number of problems contributed to the collapse of Iraqi security forces, including problems in training, equipping, and vetting them. After the collapse of the Iraqi security forces in the spring of 2004, the Administration completed three key documents that outlined the evolving U.S. strategy for Iraq, none of which anticipated the level of sectarian violence that occurred after the Samarra mosque bombing in February 2006. First, during the summer of 2004, MNF-I completed a campaign plan that elaborated on and refined the original strategy for transferring security responsibilities to Iraqi forces at the local, regional, and national levels. Further details on this campaign plan are classified. Second, in November 2005, the National Security Council (NSC) issued the National Strategy for Victory in Iraq (NSVI) to clarify the President’s existing strategy for achieving U.S. political, security, and economic goals in Iraq. Third, in April 2006, MNF-I and the U.S. embassy in Baghdad issued the first joint campaign plan, which attempted to integrate U.S. political, military, and economic efforts in Iraq. Further details of this campaign plan are classified. 
In July 2006, we reported that the NSVI represented an incomplete strategy. The desirable characteristics of an effective national strategy are purpose, scope, and methodology; detailed discussion of problems, risks, and threats; the desired goal, objectives, activities, and outcome-related performance measures; description of future costs and resources needed; delineation of U.S. government roles, responsibilities, and coordination mechanisms; and a description of the strategy’s integration among and with other entities. On the one hand, the NSVI’s purpose and scope were clear because the strategy identified U.S. involvement in Iraq as a vital national interest and Iraq as a central front in the war on terror. The strategy also discussed the threats and risks facing the coalition forces and provided a comprehensive description of U.S. political, security, and economic goals and objectives in Iraq over the short term, medium term, and long term. However, the NSVI only partially identified the agencies responsible for implementing it, the current and future costs of U.S. involvement in Iraq, and Iraq’s contribution to its future needs. The strategy also did not anticipate that security conditions in Iraq would deteriorate as they did in 2006, as evidenced by the increased numbers of attacks and the Sunni-Shi’a sectarian strife that followed the February 2006 bombing of the Golden Mosque in Samarra.

Enemy-initiated attacks against the coalition and its Iraqi partners increased through October 2006 and remained at high levels through the end of the year. During 2006, according to State and United Nations (UN) reports, insurgents, death squads, militias, and terrorists increased their attacks against civilians, largely on a sectarian basis. In addition, the number of internally displaced persons (IDP) in Iraq sharply increased, primarily as a result of sectarian intimidation and violence that forced many people from their homes. By the end of 2006, according to the UN, many Baghdad neighborhoods had become divided along Sunni and Shi’a lines and were increasingly controlled by armed groups claiming to act as protectors and defenders of these areas. According to the President, the violence in Iraq—particularly in Baghdad—overwhelmed the political gains the Iraqis had made.

In response to the escalating violence, the President in January 2007 announced a new strategy—The New Way Forward—that established a new phase in U.S. operations for the near term of 12 to 18 months, or until July 2008. According to State and DOD officials, the Administration did not revise the NSVI strategy document when it announced The New Way Forward. Instead, four documents outline the goals and objectives of The New Way Forward: (1) NSC, Highlights of the Iraq Strategy Review, January 2007; (2) the President’s address to the nation, January 10, 2007; (3) Fact Sheet: New Way Forward in Iraq, January 10, 2007; and (4) Office of the Press Secretary, White House, Background Briefing by Senior Administration Officials, January 10, 2007. According to the NSC document, the new strategy altered the administration’s assumptions regarding the security and political conditions in Iraq and how they would help or hinder the achievement of U.S. goals. For example, the Administration previously believed that the Iraqi elections in 2005 would lead to a national compact for democratic governance shared by all Iraqis and that the continued training and equipping of Iraqi security forces would facilitate reductions in U.S. military forces.
The New Way Forward acknowledged that national reconciliation might not take the form of a comprehensive national compact but could come from piecemeal efforts (see table 1). Similarly, The New Way Forward stated that while many Iraqi security forces were leading military operations, they were not yet ready to handle security challenges independently. The January 2007 strategy documents defined the original goals and objectives that the Administration believed were achievable by the end of this phase in July 2008. For example, the President pledged to increase the number of U.S. military forces in Iraq to help the Iraqis carry out their campaign to reduce sectarian violence and bring security to Baghdad and other areas of the country. The strategy also called for MNF-I to transfer security responsibilities to all 18 Iraqi provinces by the end of 2007. Further, the President committed to hold the Iraqi government to its pledges to (1) enact and implement key legislation to promote national reconciliation, (2) execute its capital budget, and (3) provide essential services to all Iraqi areas and communities and help Iraq maintain and expand its oil exports. The following section provides information on security conditions in Iraq from mid-2007 through May 2008, including factors affecting these conditions. Establishing a basic level of security is a key goal of The New Way Forward. Figure 1 shows that the overall levels of violence in Iraq—as measured by enemy-initiated attacks—decreased about 70 percent from June 2007 to February 2008, a significant reduction from the high levels of violence in 2006 and the first half of 2007. Similarly, as depicted in figure 2, the average daily number of enemy-initiated attacks declined from about 180 in June 2007 to about 60 in November 2007 and declined further to about 50 in February 2008. From 2003 through 2007, enemy-initiated attacks had increased around major political and religious events, such as Iraqi elections and Ramadan. In 2007, attacks did not increase during Ramadan. In a March 2008 report, DOD noted that reductions in violence across Iraq have enabled a return to normal life and growth in local economies. However, data for March 2008 show an increase in violence in Iraq. Security conditions deteriorated in March 2008, with the average number of attacks increasing from about 50 per day in February 2008 to about 70 attacks per day in March—about a 40 percent increase (see fig. 2). According to an April 2008 UN report, the increase in attacks resulted from Shi’a militias fighting Iraqi security forces throughout southern Iraq, as well as an increase in incidents of roadside bomb attacks against Iraqi security forces and MNF-I in Baghdad. The average number of attacks declined to about 65 per day in April and to about 45 per day in May. The enemy-initiated attacks counted in the Defense Intelligence Agency’s (DIA) reporting include car, suicide, and other bombs; ambushes; murders, executions, and assassinations; sniper fire; indirect fire (mortars or rockets); direct fire (small arms or rocket-propelled grenades); surface- to-air fire (such as man-portable air defense systems, or MANPADS); and other attacks on civilians. They do not include violent incidents that coalition or Iraqi security forces initiate, such as cordon and searches, raids, arrests, and caches cleared. According to DIA, the incidents captured in military reporting do not account for all violence throughout Iraq. 
For example, they may underreport incidents of Shi’a militias fighting each other and attacks against Iraqi security forces in southern Iraq and other areas with few or no coalition forces. DIA officials stated, however, that they represent a reliable and consistent source of information that can be used to identify trends in enemy activity and the overall security situation. According to DOD reports, the reduction in overall violence resulted primarily from steep declines in violence in Baghdad and Anbar provinces, though the violence in Baghdad increased in March 2008 (see fig. 3). These two provinces had accounted for just over half of all attacks in Iraq around the time the President announced The New Way Forward. As of February 2008, during one of the lowest periods for attacks in Iraq since the start of The New Way Forward, about one-third of all attacks in Iraq occurred in Baghdad and Anbar provinces. Despite improvements in the security situation, an April 2008 UN report found that violence has continued throughout Iraq and could rapidly escalate. According to the UN, toward the end of 2007, suicide bombings, car bombs, and other attacks continued with devastating consequences for civilians. While security improved in Baghdad and other locations, it deteriorated elsewhere, including in the city of Mosul in Ninewa province and in Diyala province. According to the UN report, religious and ethnic minorities and other vulnerable groups were victims of violent attacks. Armed groups also carried out assassinations of government or state officials, religious figures, professional groups, and law enforcement personnel. The violence in Iraq continues to result in the displacement of many Iraqis from their homes. In late March 2008, the Internally Displaced Persons (IDP) Working Group reported that the number of IDPs remained very high, but new displacement was occurring at a lower rate. The working group attributed the lower rate of displacement to, among other things, the increasing ethnic homogenization within Iraq; the decrease in security incidents in some areas of Baghdad; and restrictions on freedom of movement in many Iraqi provinces. During April 2008, according to UN and International Organization for Migration reports, hundreds of Iraqi families fled their homes in the Sadr City area of Baghdad, with the majority returning by early June 2008. The IDP Working Group estimated that over 2.77 million people were displaced inside Iraq, of which more than 1.5 million were displaced from 2006 through March 20, 2008. Further, the IDP Working Group estimated that 2 million additional Iraqis have left the country, including 1.2 million to 1.5 million who went to Syria and 450,000 to 500,000 who went to Jordan. The IDP Working Group also reported that as of March 20, 2008, large-scale return movements have not occurred. According to a May 2008 State Department report, more Iraqis were entering Syria in early 2008 than were returning to Iraq. State also reported that overall conditions for refugees in the region and Iraqis internally displaced continue to deteriorate. Moreover, the dangerous and volatile security conditions continue to hinder the movement and reconstruction efforts of international civilian personnel throughout Iraq. For example, according to a March 2008 DOD report, security concerns continue to discourage international investors and hinder private sector growth in most parts of the country. 
Due to the dangerous security conditions, State Department-led Provincial Reconstruction Teams continue to rely heavily on military assets for movement security and quick reaction force support, among other areas. Further, in April 2008, the UN reported that it has limited access throughout Iraq due to security constraints that hinder UN movement and daily activities. The United Nations also reported an increase in attacks against secure facilities that house and employ international diplomatic and military personnel. For example, from October 2007 through mid-March 2008, the indirect fire attacks aimed at the International Zone were less than a dozen. However, during the last week of March, the International Zone received 47 separate indirect fire barrages consisting of 149 rounds of 122-millimeter and 107-millimeter rockets and at least three larger 240-millimeter rockets, one of which hit the UN compound. In addition, according to the UN report, the incidence of indirect fire attacks on Basra air station, the British military base that also houses U.S. and other international civilian personnel, rose steadily during the first 3 months of 2008, with 48 attacks from January to March. The New Way Forward has the goal of defeating al Qaeda in Iraq (AQI) and its supporters and ensuring that no terrorist safe haven exists in Iraq. According to MNF-I, DOD, and State reports, rejection of al Qaeda in Iraq by significant portions of the population and operations to disrupt AQI networks have helped decrease violence in Iraq; however, AQI is not defeated and maintains the ability to carry out high-profile attacks. According to MNF-I’s Commanding General, the loss of local Sunni support for AQI had substantially reduced the group’s capability, numbers, areas of operation, and freedom of movement. DOD reported in March 2008 that AQI lost strength and influence in Anbar province, Baghdad, the belts around Baghdad, and many areas of Diyala province. The report notes, however, that AQI remains highly lethal and maintains a significant presence in parts of the Tigris River Valley, Ninewa province, and other areas of Iraq. According to an MNF-I report, AQI is now predominately based in northern Iraq, especially in Mosul, where frequent high-profile attacks continue. DOD, State, and UN reports attribute the reductions in violence in Iraq to three key actions: (1) the increase in U.S. combat forces, (2) the establishment of nongovernmental Iraqi security forces, and (3) the cease- fire declaration of the Mahdi Army leader. In announcing The New Way Forward in January 2007, the President cited two primary reasons for ordering an increase in U.S. forces in Iraq. First, the President acknowledged that earlier efforts to provide security in Baghdad had failed, in part, due to an insufficient number of U.S. and Iraqi troops to secure neighborhoods cleared of terrorists and insurgents. He therefore called for an increase of over 20,000 U.S. combat and other forces, including an additional 5 brigades. The vast majority of these troops would help Iraqis clear and secure neighborhoods and protect the local population. Second, to support local tribal leaders who had begun to show a willingness to take on AQI, the President ordered the deployment of 4,000 U.S. troops to Anbar province. Figure 4 shows the increase of U.S. forces in Iraq from about 132,000 in December 2006 to about 169,000 in August 2007, an overall increase of about 37,000 troops—almost 30 percent above the December 2006 force level. 
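The percentage figures in the discussion above follow directly from the approximate counts quoted in this report. The short Python sketch below is illustrative only; it applies the standard percent-change calculation to the rounded attack and force-level figures cited above (average daily attacks of about 180, 60, 50, and 70 for June 2007, November 2007, February 2008, and March 2008, and U.S. force levels of about 132,000 and 169,000 for December 2006 and August 2007).

    # Illustrative only: reproduces the rounded percentages cited in this report
    # from the approximate figures it quotes.
    def percent_change(old: float, new: float) -> float:
        """Return the percent change from old to new."""
        return (new - old) / old * 100.0

    # Average daily enemy-initiated attacks (approximate figures from the report).
    attacks = {"Jun 2007": 180, "Nov 2007": 60, "Feb 2008": 50, "Mar 2008": 70}
    print(percent_change(attacks["Jun 2007"], attacks["Feb 2008"]))  # about -72, i.e., roughly a 70 percent decline
    print(percent_change(attacks["Feb 2008"], attacks["Mar 2008"]))  # 40.0, the March 2008 increase

    # U.S. force levels in Iraq (approximate figures from the report).
    dec_2006, aug_2007 = 132_000, 169_000
    print(aug_2007 - dec_2006)                 # 37000 additional troops
    print(percent_change(dec_2006, aug_2007))  # about 28, i.e., almost 30 percent above the December 2006 level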
In September 2007, the President announced that the United States would withdraw the surge forces by July 2008—the end of The New Way Forward—resulting in a decline in U.S. brigade combat teams from 20 to 15 and a projected force level of about 140,000 U.S. troops. The MNF-I Commanding General reported in April 2008 that he would need 45 days after the surge brigades leave Iraq to consolidate his forces and assess how the reduced U.S. military presence will affect conditions on the ground. After that time, he would assess whether U.S. forces could be further reduced. According to DOD reporting, the additional surge forces allowed MNF-I to increase its operational tempo and change tactics in providing security to the Iraqi people. Specifically, the additional troops enabled MNF-I to maintain a continuous presence in Baghdad and surrounding areas by establishing about 60 joint security stations with Iraqi forces and combat outposts outside of its large operating bases as of August 2007 (see fig. 5). In May 2008, the former commander of the Multinational Corps-Iraq reported that the number of joint security stations and combat outposts had since increased to 75. In March 2008, DOD reported that these security stations and outposts had a stabilizing effect along ethnic fault lines, complemented MNF-I’s efforts to reconcile former insurgents, and helped maintain pressure on domestic and external insurgent elements. Over time, according to the DOD report, MNF-I will transfer the joint security stations and combat outposts to Iraqi forces as it draws down and moves to a support role. According to DOD and MNF-I reports, the establishment of local nongovernmental security forces that oppose AQI has helped decrease the levels of violence in parts of Iraq, most notably in Anbar province, but these groups by and large have not yet reconciled with the Iraqi government. The groups, including those now known as the Sons of Iraq, began forming in Anbar province in late 2006, with the movement spreading to other areas of Iraq during 2007 and 2008. As Sons of Iraq, these former insurgents take an oath to be law-abiding citizens and work with MNF-I and, in some cases, the Iraqi government to protect their local communities. Most work on MNF-I contracts. Overall, according to an April 2008 MNF-I report, the various Sons of Iraq groups consisted of about 105,000 members. Sons of Iraq groups do not have a national or regional structure, as local groups are generally organized along sectarian lines based on the neighborhoods in which they operate. In March 2008, DOD reported that the Sons of Iraq program has helped to improve security at the local level by involving local citizens in the security of their communities. According to the DOD report, the Sons of Iraq are a key component of the counterinsurgency fight due to their knowledge of the local populace and their ability to report activities that might otherwise escape the attention of MNF-I and Iraqi forces. These groups also provide security for roads, municipal buildings, power lines, and other key facilities in their local communities under the direction of MNF-I or Iraqi forces, thereby allowing MNF-I and Iraqi forces to pursue and engage the enemy. While the Sons of Iraq are playing an important role at the local level to quell violence, DOD reported that they also pose some challenges for the Iraqi government and the coalition. 
These challenges include the potential for infiltration by insurgents, the possible distortions in the local economy if salaries are not carefully managed, and the lack of a cohesive Iraqi plan to transition the Sons of Iraq to the Iraqi forces or civilian employment. According to DOD reporting, the Iraqi government continues to debate the future of the Sons of Iraq, raising concerns over infiltration by irreconcilable elements, the merits of supporting or employing a large number of former insurgents, and the methods for transitioning Sons of Iraq members into the Iraqi forces, private sector employment, or educational programs. Further, according to the April 2008 UN report, despite their relative success and growing numbers, during early 2008 some tribal security forces temporarily withdrew their support of MNF-I and the Iraqi security forces in Diyala and Babil provinces. Fraying relations between these groups and the Iraqi government in Anbar province caused a spike in violence in this area. As of March 2008, DOD reported that about 20,000 Sons of Iraq had already transitioned to the Iraqi security forces or civil employment. According to DOD and UN reports, the cease-fire declared in August 2007 by the leader of the Mahdi Army, an extremist Shi’a militia, contributed significantly to the decline in violence in the second half of 2007. However, the cease-fire appears tenuous as the militia recently increased attacks against other Shi’a militias, the coalition, and Iraqi security forces before declaring another cease-fire on May 11. The Mahdi Army and its affiliated special groups remain the largest and most dangerous Shi’a militia in Iraq, according to an MNF-I report, with a combined nationwide strength of approximately 25,000 to 40,000 active members supported by a large body of non-active supporters. According to DOD and UN reports, the cease-fire showed signs of fraying in late 2007, as tensions increased in southern Iraq among the various Shi’a militia factions. These tensions led the various Shi’a militia factions to begin routinely launching attacks against each other’s interests and periodically engaging in open conflict lasting several days, or even weeks, before Iraqi security forces and MNF-I intervened. In February 2008, according to the UN report, there were numerous public demonstrations against the political and security leadership in Basra. Despite the reaffirmation of the Mahdi Army ceasefire in February, the Iraqi government launched an offensive against criminal and militia elements in Basra in late March 2008, which sparked widespread fighting in Baghdad, Basra, and other southern cities. According to a UN report, violence declined in Basra in April as the Iraqi government and various armed groups reached agreement to stop fighting, but violence continued in Sadr City, a Mahdi Army-controlled area of 2.5 million people. Moreover, the Iraqi security forces have conducted operations targeting the Mahdi Army in Nassiriyah, al-Amarah, al-Kut, and Hillah, thus escalating the level of violence in these cities. Najaf and Karbala also suffered explosive attacks in the last week of March, which, according to the UN, are rare occurrences in these two cities. On May 20, 2008, the International Organization for Migration reported that the security situation had improved somewhat in Sadr City due to a truce between the Mahdi Army and government forces on May 11. 
This section discusses the strength and capabilities of Iraqi security forces and efforts to transfer security responsibilities to the Iraqi government. The New Way Forward set the goal of developing capable Iraqi security forces and transferring security responsibilities to the government of Iraq. Since 2003, the United States has provided more than $20 billion to develop Iraqi security forces. The Iraqi security forces comprise Ministry of Defense and Ministry of Interior forces that vary in size. Overall, the number of Iraqi military and police personnel has increased from about 142,000 in March 2005 to about 445,000 in April 2008. The number of Iraqi security forces is almost three times that of the 162,300 U.S. forces in Iraq as of April 2008. The Iraqi total includes about 203,000 under the Iraqi Ministry of Defense and about 238,000 under the Ministry of Interior. Table 2 provides the force levels for the major components of the Iraq security forces in March 2005, January 2007, and April 2008. In commenting on a draft of this report, DOD stated that the number of trained and equipped Iraqi security forces had grown to about 478,000 as of May 2008. Ministry of Defense forces consist of 12 Iraqi army divisions and a small air force and navy. These forces have grown by more than 230 percent since March 2005. Iraqi Ministry of Interior forces consist of Iraqi police— which, as of April 2008, represent about 70 percent of personnel within the Ministry of Interior—and other units, specifically, the national police (formerly the special police), Department of Border Enforcement, and Center for Dignitary Protection. Iraqi police precincts are under the operational control of their local municipality and the corresponding provincial government. Ministry of Interior forces have grown by more than 200 percent since March 2005. Future projections show that the Iraqi security forces will continue to grow. DOD reported that Iraqi security forces—military, police, and special operations forces—could reach 646,000 by 2010 (see figure 6). Specifically, the Ministry of Interior is projected to grow to about 389,000 employees in the Iraqi police service, national police, and Directorate of Border Enforcement. Ministry of Defense forces will include 13 army divisions (12 infantry, 1 armored) along with supporting forces, 1,500 navy personnel, 4,000 air force personnel, and 5,750 counterterrorism forces. The number of trained Iraqi security forces may overstate the number of troops present for duty. According to DOD, the number of trained troops includes personnel who are deceased or absent without leave. For example, DOD reported that approximately 24,500 soldiers were dropped from the Iraqi Army rolls in 2007 because they deserted or were absent without leave. However, these troops are still counted in trained numbers. An April 2008 Special Inspector General for Iraqi Reconstruction report confirmed that a substantial number of Iraqi personnel still on the payroll were not present for duty for various reasons, such as being on leave, absent without leave, injured, or killed. In September 2007, GAO assessed the Iraqi government’s progress in increasing the number of Iraqi security forces’ units capable of operating independently. This was a benchmark established by the U.S. Congress and derived from benchmarks and commitments articulated by the Iraqi government beginning in June 2006. 
The number of independent Iraqi security forces as measured by Operational Readiness Assessments (ORA) level 1 continues to be an important measure of the capabilities of Iraqi security forces. Although Iraqi security forces have grown in number and many are leading counterinsurgency operations, MNF-I assessments of their readiness levels show limited improvements. MNF-I uses ORA to determine when Iraqi units can assume the lead for security operations. The ORA is a classified joint assessment prepared monthly by the unit’s coalition and Iraqi commanders. For the Iraqi army, commanders use the ORA process to assess a unit’s personnel, command and control, equipment, sustainment and logistics, and training and leadership capabilities. ORA level 1 is a unit capable of planning, executing, and sustaining counterinsurgency operations; level 2 is capable of planning, executing, and sustaining counterinsurgency operations with Iraqi security force or coalition force assistance; level 3 is partially capable of planning, executing, and sustaining counterinsurgency operations with coalition force assistance; level 4 is forming and/or incapable of conducting counterinsurgency operations. In April 2008, the Commanding General of MNF-I reported that more Iraqi security force battalions were leading security operations in Iraq. He stated that MNF-I handed over the lead security responsibility to 19 additional Iraqi army battalions between January 2007 and March 2008, as displayed in figure 7. While 65 percent of the Iraqi units were in the lead in counterinsurgency operations as of March 2008, the number of Iraqi army battalions rated at the highest readiness level accounts for less than 10 percent of the total number of Iraqi army battalions. While the number of battalions “in the lead”—that is, leading counterinsurgency operations with or without coalition support—increased from 93 in January 2007 to 112 in March 2008, MNF-I is now including some units at ORA level 3 as in the lead, which are assessed as partially capable of conducting counterinsurgency operations. In contrast, the January 2007 report did not include ORA Level 3 units as in the lead. GAO is completing work assessing the capabilities of the Iraqi security forces at each ORA level. According to DOD, the Iraqi national police battalions, organized under the Ministry of Interior, generally have been less capable and have shown less progress than Iraqi army battalions. While the number of Iraqi national police battalions increased from 27 in January 2007 to 36 in March 2008, no units achieved ORA level 1, and about 11 units were at ORA level 2. The United States faces several challenges in enhancing the capabilities of Iraq’s security forces: (1) the lack of a single unified force; (2) sectarian and militia influences; (3) continued dependence upon U.S. and coalition forces for logistics and combat support; and (4) training and leadership shortages. First, Iraqi security forces are not a single unified force with a primary mission of countering the insurgency in Iraq. Only one major component of the Iraqi security forces, the Iraqi army, has counterinsurgency as its primary mission. The Iraqi army represents about 45 percent of 445,000 trained Iraqi security forces. The Iraqi local police represent 37 percent of total trained security forces and have civilian law enforcement as a primary mission. The Iraqi national police account for 10 percent of total trained Iraqi forces. 
According to the Independent Commission on the Security Forces of Iraq, the national police are not a viable organization, as they face significant challenges, including public distrust, real and perceived sectarianism, and uncertainty as to whether they are a military or a police force. The commission recommended that the national police be disbanded and reorganized under the Ministry of Interior. As a smaller organization with a different name, it would be responsible for specialized police tasks such as explosive ordnance disposal, urban search and rescue, and other functions.

Second, sectarian and militia influences have divided the loyalties of the Iraqi security forces. In May 2007, the U.S. Commission on International Religious Freedom reported that Iraq's Shi'a-dominated government has engaged in sectarian-based human rights violations and has tolerated abuses committed by Shi'a militias with ties to political factions in the governing coalition. According to the commission, the Iraqi government, through its security forces, has committed arbitrary arrest, prolonged detention without due process, targeted executions, and torture against non-Shi'a Iraqis. In September 2007, we determined that the Iraqi government had not eliminated militia control over local security forces and that sectarianism in the Iraqi security forces was a serious problem in Baghdad and other areas of Iraq. According to DOD, as of March 2008, sectarianism and corruption continue to be significant problems within the Ministries of Interior and Defense. For example, some army units sent to Baghdad have had ties to Shi'a militias, making it difficult to target Shi'a extremist networks. According to the March 2008 State Department Human Rights Report, the effectiveness of Ministry of Interior forces, particularly the national police, was seriously compromised by militia influence.

Third, as we reported in November 2007, Iraqi units remain dependent upon the coalition for their logistical, command and control, and intelligence capabilities. The Ministries of Defense and Interior were not capable of accounting for, supporting, or fully controlling their forces in the field, nor did the Iraqi security forces have critical enablers such as intelligence and logistics systems and processes that permit independent planning and operations. Due to Iraq's immature logistics systems, many Iraqi military and police units will continue to depend on MNF-I for key sustainment and logistics support through 2008. Further, the Independent Commission on the Security Forces of Iraq stated that the Iraqi Army remains heavily dependent on contracted support to satisfy day-to-day requirements, and it appears that contracted logistics support in some form will be necessary for 2 to 3 years.

Fourth, shortfalls in training, leadership, personnel, and sustainment have contributed to the limited progress in the number of Iraqi battalions capable of operating independently, according to DOD reports. To address this problem, the Iraqi government has expanded its training capacity. According to DOD's March 2008 report, the Ministry of Interior has expanded the number of its training facilities from 4 to 17 over the past year and is implementing its first annual strategic plan. In addition, the Iraqi army plans to develop training centers in 2008 that will train an additional 2,000 soldiers per cycle.
However, DOD noted that Ministry of Interior and Defense basic combat and police training facilities are at or near capacity and that the shortage of leaders in the Iraqi security forces will take years to address. Furthermore, the influx of about 20,000 of the 105,000 Sons of Iraq who are currently working with coalition forces will place an additional strain on the capacity of the Iraqis to train their forces, particularly the police.

The ability of a province to transfer from MNF-I to provincial Iraqi control is dependent on security and governance in each province. Due to increased levels of violence and the lack of capable Iraqi security forces, the projected transition dates for the completion of the provincial Iraqi control process have shifted over time. In June 2005, Iraq's Prime Minister announced a joint decision between the government of Iraq and MNF-I to systematically hand over security responsibility in each of Iraq's 18 provinces to the control of the provincial governor. The Joint Committee to Transfer Security Responsibility was commissioned in July 2005 to develop a set of conditions assessing the readiness of each province for Iraqi control. Four conditions are used to determine whether a province should be transferred to provincial Iraqi control: (1) the threat level of the province, (2) Iraqi security forces' capabilities, (3) the governor's ability to oversee security operations, and (4) MNF-I's ability to provide reinforcement if necessary. According to MNF-I, as these conditions are met, MNF-I forces will then leave all urban areas and assume a role supporting Iraq's security forces.

In January 2007, The New Way Forward stated that the Iraqi government would take responsibility for security in all 18 provinces by November 2007. However, this date was not met, as only 8 of 18 provinces had transitioned to Iraqi control at that time. According to DOD, in September 2007, the principal cause for the delay in transitioning provinces to Iraqi control was the inability of the Iraqi police to maintain security in the provinces. For example, as a result of the February 2007 Baghdad Security Plan, an increased number of terrorists, insurgents, and members of illegal militias fled Baghdad for other provinces, and the Iraqi police were unable to handle these threats.

As of May 2008, nine provincial governments have lead responsibility for security in their provinces. Six of the nine provinces that have assumed security responsibilities are located in southern Iraq, where the British forces had the lead and have continued to draw down their forces. The remaining three provinces are located in northern Iraq, in the area controlled by the Kurdistan Regional Government. Figure 8 displays the degree to which the provinces had achieved provincial Iraqi control as of May 2008. According to the MNF-I Commanding General, eight of the nine remaining provinces are expected to transition to provincial Iraqi control by early 2009. One of the provinces (Ta'mim) has no expected transition date. Figure 9 shows the projected timelines for transferring security responsibilities to the remaining provincial governments. According to the MNF-I Commanding General, the coalition continues to provide assistance even after security responsibilities have transferred to provincial Iraqi control. For example, the coalition continues to support Iraqi-led operations in those provinces with planning, logistics, close air support, intelligence, and embedded transition teams.
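The provincial transition decision described above rests on a joint assessment of four conditions. The sketch below is a minimal, purely illustrative model of that kind of condition-based check, not a representation of MNF-I's actual assessment methodology; the condition names are paraphrased from this report, and the rule that all four must be met is an assumption made for illustration.

    # Illustrative sketch of a four-condition check for provincial Iraqi control.
    # Condition names are paraphrased from the report; the all-conditions rule is
    # an assumption for illustration, not MNF-I's actual methodology.
    from typing import Dict

    PIC_CONDITIONS = (
        "threat_level_acceptable",      # (1) threat level of the province
        "iraqi_forces_capable",         # (2) Iraqi security forces' capabilities
        "governor_oversees_security",   # (3) governor's ability to oversee security operations
        "mnf_i_can_reinforce",          # (4) MNF-I's ability to provide reinforcement if necessary
    )

    def ready_for_provincial_control(assessment: Dict[str, bool]) -> bool:
        """Return True only if every condition in the joint assessment is met."""
        return all(assessment.get(condition, False) for condition in PIC_CONDITIONS)

    # Hypothetical example: three of the four conditions met, so this province
    # would not yet transfer under the simple rule above.
    example = {
        "threat_level_acceptable": True,
        "iraqi_forces_capable": True,
        "governor_oversees_security": False,
        "mnf_i_can_reinforce": True,
    }
    print(ready_for_provincial_control(example))  # False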
This section describes progress toward the U.S. goal of helping Iraq enact key legislation that would promote national reconciliation. To promote national reconciliation and unify the country, the Iraqi government, with U.S. support, committed in 2006 to address political grievances among Iraq's Shi'a, Sunni, and Kurd populations. The U.S. and Iraqi governments believed that political compromise and the passage of legislation, such as laws reintegrating former Ba'athists and sharing hydrocarbon resources equitably, were essential to fostering reconciliation. In 2007, in The New Way Forward, the U.S. government identified legislation that the Iraqi government committed to enact by December 31, 2007. The United States also promoted Iraq's reconciliation by assisting the country in its constitutional referendum and legislative elections and building the capacity of Iraq's legislature.

Since September 2007, the Iraqi government has enacted three laws that could address some Sunni concerns—de-Ba'athification reform, amnesty for certain detainees in Iraq's justice system, and provincial powers. These three laws were enacted after considerable debate and compromise and, according to State and DOD reports, represented positive signs of political progress. The de-Ba'athification and amnesty laws are steps to address the concerns of Sunnis and Sadrists that members of these groups had been removed from government service or detained and arrested. According to the U.S. ambassador to Iraq, the number of Iraqis currently held in detention is a significant problem. The provincial powers law established a date for new provincial elections, which could address Sunni underrepresentation in several provincial governments.

However, three additional laws considered critical for national reconciliation have not been enacted. These include laws that set the rules for Iraq's provincial elections, define the control and management of Iraq's oil and gas resources, and provide for disarmament and demobilization of Iraq's armed groups. According to U.S. reports, the oil law and the law on disarmament and demobilization are stalled.

According to U.S. and other officials and documents, although the process is evolving, enacting legislation generally includes the following steps: The Presidency Council and the Council of Ministers have authority to draft laws, and the Iraqi legislature—either a committee or 10 members—has the authority to propose laws. Laws drafted by the Presidency Council or the Council of Ministers are reviewed for legal soundness and subject matter by the Shura Council, an institution in the Ministry of Justice. Laws drafted by the legislature must first pass through its Legal Committee. The legislation then proceeds through three readings. The legislation is presented at the first reading; the relevant committee may amend the law, and the Speaker's Office places it on the calendar. After the first reading, the legislature discusses the proposed law at a second reading. At the third reading, a final vote is taken article by article. Laws that receive an affirmative vote are sent to the Presidency Council, which can disapprove the law; the legislature can override the disapproval with a three-fifths majority. This ratification process applies only during the transition period when the Presidency Council is in existence. Final laws are published in the Official Gazette and become effective on the date of publication unless stipulated otherwise.
Figure 10 shows the laws enacted since September 2007, identifies the steps left to enact the remaining legislation, and indicates the status of implementation, which will be discussed in the next section. Since we last reported on legislation to promote national reconciliation in September 2007, the Iraqi government has passed the following laws.

As of September 2007, drafts of de-Ba'athification reform legislation were under initial review by the Council of Representatives. After extensive debate, the Iraqi legislature passed the de-Ba'athification reform law on January 12, 2008. The Presidency Council approved the law in February 2008, and it was published in the Official Gazette. According to a March 2008 DOD report, if implemented in the spirit of reconciliation, this law could allow some former Ba'athist party members, many of whom were Sunni, to return to government. The new law establishes a national commission to complete the removal of former high-level officials of the Ba'athist party, consistent with measures outlined in the law. The law, however, allows some lower-ranking members of the Ba'athist party to return to or continue working for the government. By contrast, in May 2003, Coalition Provisional Authority (CPA) Order 1 provided for investigation and removal of even junior members of the party from government, universities, and hospitals.

As of September 2007, the Iraqi government had not drafted an amnesty law. After considerable negotiation among the political blocs, the legislation was combined with other pieces of legislation and passed as part of an overall package in February 2008. According to a March 2008 DOD report, the law represents an important step toward addressing a long-standing demand for detainee releases, but the ultimate effect on national reconciliation will depend on its implementation. The law provides for amnesty and release of Iraqis sentenced to prison and those under investigation or trial, provided they are not involved in certain crimes such as kidnapping, murder, embezzling state funds, smuggling antiquities, or terrorism that results in killing or permanently disabling victims. The law also requires the Iraqi government to undertake the necessary measures to transfer those detained in MNF-I facilities to Iraqi facilities so that the provisions of this law can be applied to them. This law is important to Sunnis and Sadrists, according to State and USIP officials, as many were detained or held without trial.

As of September 2007, the Iraqi legislature had completed the second reading of a draft of the provincial powers legislation. In February 2008, after considerable negotiation, the Iraqi government passed the provincial powers legislation as part of an overall legislative package and after an initial veto by the Shi'a vice president of the Presidency Council was withdrawn. According to a March 2008 DOD report, the law is an important step toward establishing a balance between adequate central government authority and strong local governments, some of which represent provinces with large or majority Sunni populations. The law outlines the specific powers of the provinces and provides the structure of government for the provincial and local councils. The law also sets the date for provincial council elections as no later than October 1, 2008.

Other key legislation has not passed, including the provincial elections law, the hydrocarbon laws, and legislation on disarmament and demobilization. As of September 2007, a provincial elections law had not been drafted.
Since then, the Prime Minister’s Office has drafted a provincial elections law and presented it to the Iraqi legislature, where it has completed its second reading. As of May 2008, the Iraqi legislature is debating its provisions. This draft law would provide the rules for holding provincial elections, which are critical to promote national reconciliation. According to a DOD report, new elections would enhance reconciliation by enabling the creation of provincial councils that are more representative of the populations they serve. Many Sunnis did not vote in the 2005 provincial elections, resulting in underrepresentation of Sunnis in some provincial councils. In Baghdad, for example, the population is about 40 percent Sunni, but the council has 1 Sunni representative out of 51, according to a March 2008 State report. As of September 2007, the Iraqi government had drafted three of the four separate but interrelated pieces of legislation needed to establish control and management of Iraq’s hydrocarbon resources and ensure equitable distribution of revenues. Since that time, only the hydrocarbon framework draft, which establishes the control and management of the oil sector, has progressed to the Council of Representatives. The three additional laws include legislation to establish revenue sharing, restructure the Ministry of Oil, and establish the Iraqi National Oil Company. According to State officials, the Kurdistan Regional Government (KRG) and the federal government disagree on many areas of the proposed legislation, particularly on the issue of how much control the KRG will have in managing its oil resources. For example, the KRG has passed its own oil and gas law. Furthermore, the KRG has negotiated an estimated 25 contracts with foreign oil firms, which the Iraqi federal government claims are illegal. As of September 2007, the Iraqi legislature had not drafted legislation on disarmament and demobilization of militias and armed groups. Since then, no progress has been made on drafting legislation. According to the United Nations, minimum requirements for a successful disarmament and demobilization program in Iraq include a secure environment, the inclusion of all belligerent parties, an overarching political agreement, sustainable funding, and appropriate reintegration opportunities. As of May 2008, these conditions were not present. For example, the United Nations reported that since March 27, 2008, intense fighting in Sadr City has occurred among militias linked to Muqtada Al Sadr and the Iraqi security forces and MNF-I. According to the Iraqi government, between late March 2008 and the end of April 2008, 925 persons were killed and 2,600 persons injured during the military operation. Although Iraq has enacted some legislation it judged important for national reconciliation, implementation of the legislation and its outcomes are uncertain. For example, the amnesty legislation is currently being implemented as detainees have been approved for release, but a limited number have been set free as of May 2008. Moreover, implementation of the de-Ba’athification law has stalled, and holding free and fair provincial elections poses logistical and security challenges. Implementation of the amnesty law began on March 2, 2008. According to the Iraq Higher Juridical Council, as of May 1, 2008, almost 17,000 prisoners and detainees have been approved for release. According to State officials, the law is implemented at the provincial level by committees of provincial judges. 
These committees are more likely to implement the law, according to State officials, because several are located in provinces with large Sunni populations where many detainees are located. However, according to the U.S. Embassy in Iraq, the process of releasing prisoners and detainees is slow, and, according to State, approximately 1,600 have been released to date. The legislation does not provide a time frame for the approximately 25,000 MNF-I detainees to be turned over to Iraqi custody. Although the de-Ba’athification law was enacted in February 2008, implementation of the law has stalled, delaying the possible reinstatement of an estimated 30,000 former government employees. The Iraqi government has yet to appoint members of the Supreme National Commission on Accountability and Justice, which has primary responsibility for implementing the law. According to State officials, Sunnis are concerned about the law’s implementation and the choice of commissioners. The Iraqi government faces challenges in holding provincial elections by October 2008, as required by the provincial powers law. According to State officials, a provincial election law has not been enacted and the draft law contains confusing and contentious issues. For example, the draft law states that any political entity that possesses an armed militia is prohibited from participating in the election. According to State, this provision could eliminate some political parties, such as the Sadrist Trend. According to a UN report and U.S. Agency for International Development (USAID) officials, there are challenges for the Iraqi government to hold these elections by late 2008. UN and IFES reports estimate that it would take about 8 months to prepare for the elections, and State estimates that elections could probably be held 4-5 months after an elections law is passed. Although some elections preparations have begun, numerous tasks remain and some cannot begin until the election rules are set by law. According to USAID and IFES, the tasks remaining included establishing voter registration lists; making voting provisions for internally displaced persons; registering candidates for the councils, including vetting them through the de-Ba’athification process; designing and printing ballots; identifying polling sites; and providing time for the candidates to campaign in their districts. According to U.S. officials, holding provincial elections will face security challenges due to likely sectarian violence, insurgent attacks, and political party militias. Elections in several areas may be fiercely contested as militias and sectarian groups may fight for control of the provincial councils and their financial resources, according to State and USAID officials. State and USAID officials said MNF-I is working with the Iraqi government to help provide support for the election. Iraq’s Constitution was approved in a national referendum in October 2005, but did not resolve several contentious issues, including the powers of the presidency, claims over disputed areas such as oil-rich Kirkuk, and the relative powers of the regions versus the federal government. According to State officials, these unresolved issues were core points of dispute among Iraq’s Shi’a, Sunni, and Kurd political blocs. According to the United Nations, Iraqi leaders included a compromise provision in the draft constitution that required the formation of the Constitutional Review Committee (CRC) to review the Constitution and propose necessary amendments. 
Since September 2007, the constitutional review process has made little progress. The CRC recommended a draft package of amendments to the Council of Representatives in May 2007, but these have not moved forward. Since then, the CRC has received multiple extensions to complete its work but has not proposed a new package of amendments. According to a March 2008 DOD report, Kurdish leaders have prevented progress in the review process until the issue of disputed territories, especially Kirkuk, is settled. The following summarizes three key issues in the Constitution that have not been resolved.

Power of the presidency. The Deputy Chairman of the CRC, a member of the Sunni bloc, believes that the Presidency Council should have greater power in relation to the prime minister to allow for better power sharing among Iraq's political groups. According to the Iraqi Constitution, in the current electoral term, a presidency council consisting of a president and two vice presidents exercises the powers of the presidency. The Presidency Council—currently a Shi'a, a Sunni, and a Kurd—can approve or disapprove legislation in the current electoral term. However, the legislature can adopt disapproved legislation by a three-fifths majority vote. On the other hand, the prime minister, selected from the legislature's largest political bloc and currently a Shi'a, is commander-in-chief of the armed forces, names the ministers for each ministry, and directs the Council of Ministers, which directs the work of all government ministries and departments, develops their plans, and prepares the government budget.

Disputed areas, particularly Kirkuk. Kurdistan Regional Government officials want a referendum to be held in Kirkuk to determine its status. Even though the deadline for holding the referendum was December 31, 2007, the KRG and the Iraqi government agreed to a 6-month extension on implementation. While KRG officials wanted a referendum to be held as soon as practical, other Iraqi legislators believe that a referendum should be deferred due to border disputes and the displacement of people in the area. The United Nations is currently consulting with various groups about the status of other disputed territories, such as the districts of Akre and Makhmour, currently in Ninewa province. According to the UN, there is no agreed-upon listing of disputed areas and their boundaries. If these discussions succeed, the process could serve as a model for determining the status of Kirkuk, according to the UN.

Power of the federal government versus regions. Shi'a, Sunni, and Kurdish political blocs disagree over fundamental questions of federalism—the relative power among the federal, regional, and provincial governments. The CRC proposed several amendments to better define and clarify the relative powers but has not achieved compromise among major political factions. The Kurdish bloc rejected the proposed changes, stating that they would decrease regional power while concentrating power in the federal government.

This section discusses Iraq's progress toward spending its capital budget and U.S. efforts to improve Iraqi budget execution. The New Way Forward emphasized the need to build capacity in Iraq's ministries and help the government execute its capital investment budgets. This U.S. goal is particularly important as current U.S. expenditures on Iraq reconstruction projects are nearing completion. However, Iraq continues to spend only a small percentage of the capital investment budgets needed to improve economic growth.
Iraq’s inability to spend its considerable resources limits the government’s efforts to further economic development, advance reconstruction projects, and, at the most basic level, deliver essential services to the Iraqi people. In recognition of this critical need, U.S. capacity development efforts have shifted from long- term institution-building projects to an immediate effort to help Iraqi ministries overcome their inability to spend their capital investment budgets. As U.S. funding for Iraq reconstruction totaling $45 billion is almost 90 percent obligated ($40 billion) and about 70 percent disbursed ($31 billion) as of April 2008, the need for Iraq to spend its own resources becomes increasingly critical to economic development. Between 2005 and 2007, Iraq budgeted about $27 billion in capital investments for its own reconstruction effort, as shown in table 3. However, the government spent about 24 percent of the amount budgeted. According to Ministry of Finance total expenditure reports displayed in figure 11, Iraq has spent low percentages of capital investment budgets between 2005 and 2007 in several key categories. Total government spending for capital investments increased slightly from 23 percent in 2005 to 28 percent in 2007. However, Iraq’s central ministries spent only 11 percent of their capital investment budgets in 2007—a decline from similarly low spending rates of 14 and 13 percent in 2005 and 2006, respectively. Last, spending rates for ministries critical to the delivery of essential services varied from the 41 percent spent by the Water Resources Ministry in 2007 to the less than 1 percent spent by the Ministries of Oil and Electricity. As discussed in the next section, low spending rates for the oil, electricity, and water sectors are problematic since U.S. investments in these sectors have ended and increased production goals for these sectors have consistently not been met. Iraq will have additional resources for capital investments in 2008. Iraq’s 2008 budget was developed with the assumption that Iraq would receive $57 per barrel for oil exports. As of May 2008, Iraqi crude oil was selling at about $104 per barrel. Oil exports generate about 90 percent of total government revenues each year. GAO will issue a separate report on Iraq’s estimated unspent and projected oil revenues for 2003 through 2008. In March 2008, DOD reported that preliminary Iraqi budget execution data for the period January to October 2007 show that the government spent 45 percent of its capital budget, and central ministries executed 47 percent of their capital budgets. Further, in commenting on a draft of this report, the Treasury Department stated that the Iraqi government spent and committed about 63 percent of its investment budget in 2007, as documented in special reports developed by the Ministry of Finance. The special reports include Iraqi commitments to spend as well as actual expenditures. “Commitments” is defined under Iraq’s Financial Management Law, as “an undertaking to make an expenditure following the conclusion of a binding agreement that will result in payment.” We did not use the special reports for our analyses for two reasons: (1) Treasury Department officials stated in our meetings with them that the special reports contain unreliable data, and (2) the special reports do not define commitments, measure them, or describe how or when these commitments would result in actual expenditures. 
In addition, our reviews of these special reports show inconsistent use of poorly defined budget terms, as well as columns and rows that do not add up. Moreover, we note that the Iraqi government operates on a cash basis in which expenditures are reported when paid. Commitments, such as signed contracts, would normally not be included in expenditures until paid. Given the security and capacity challenges currently facing Iraq, many committed contracts may not be executed and would not result in actual expenditures, according to U.S. agency officials.

U.S. government, coalition, and international agencies have identified a number of factors that challenge the Iraqi government's efforts to fully spend its budget for capital projects. These challenges include violence and sectarian strife, a shortage of trained staff, and weak procurement and budgeting systems.

First, U.S., coalition, and international officials have noted that violence and sectarian strife remain major obstacles to developing Iraqi government capacity, including its ability to execute budgets for capital projects. The high level of violence has contributed to a decrease in the number of workers available and can increase the amount of time needed to plan and complete capital projects. The security situation also hinders U.S. advisors' ability to provide the ministries with assistance and monitor capital project performance.

Second, U.S., coalition, and international agency officials have observed that the relative shortage of trained budgetary, procurement, and other staff with technical skills limits the Iraqi government's ability to plan and execute its capital spending. The security situation and the de-Ba'athification process have adversely affected available government and contractor staffing. Officials report a shortage of trained staff with budgetary experience to prepare and execute budgets and a shortage of staff with procurement expertise to solicit, award, and oversee capital projects. According to State and other U.S. government reports and officials, there has been decay for years in core functions of Iraq's government capacity, including both financial and human resource management.

Finally, weak procurement, budgetary, and accounting systems are of particular concern in Iraq because these systems must balance the efficient execution of capital projects with protection against reported widespread corruption. A World Bank report notes that corruption undermines the Iraqi government's ability to make effective use of current reconstruction assistance. According to a State Department document, widespread corruption undermines efforts to develop the government's capacity by robbing it of needed resources; by eroding popular faith in democratic institutions, perceived as run by corrupt political elites; and by spurring capital flight and reducing economic growth.

In early 2007, U.S. agencies increased the focus of their assistance efforts on improving the Iraqi government's ability to effectively execute its budget for capital projects, although it is not clear what impact this increased focus has had, given the relatively low rates of spending. The new U.S. initiatives included greater coordination between the U.S. embassy and an Iraqi task force on budget execution, and the provision of subject matter experts to help the government track expenditures and provide technical assistance with procurement. According to U.S. officials, these targeted efforts also reflect an increased interest among senior Iraqi officials in improving capital budget spending. In addition, improving Iraqi government budget execution is part of a broader U.S. assistance effort to improve the capacity of the Iraqi government through automation of the financial management system, training, and advisors embedded with ministries.

As we reported in October 2007, the development of competent and loyal Iraqi ministries is critical to stabilizing and rebuilding Iraq. In 2005 and 2006, the United States provided funding of about $169 million for programs to help build the capacity of key civilian ministries and the Ministries of Defense and Interior. As part of The New Way Forward, the Administration sought an additional $395 million for these efforts in fiscal years 2007 and 2008. Ministry capacity development refers to efforts and programs to advise and help Iraqi government employees develop the skills to plan programs, execute their budgets, and effectively deliver government services such as electricity, water, and security. We found multiple U.S. agencies leading individual efforts and recommended that Congress consider conditioning future appropriations on the completion of an integrated strategy for U.S. capacity development efforts. In commenting on a draft of this report, the State Department reiterated prior comments that it already had an integrated plan for building capacity in Iraq's ministries. In addition, State and Treasury cited a new Public Financial Management Action Group they were forming to help integrate and coordinate U.S. government assistance on improving budget execution. Adding a new program to the multiple, uncoordinated U.S. capacity development programs we found does little to address GAO's recommendation for an integrated strategy.

The government of Iraq also has made recent efforts to address impediments to budget execution. For example, State reported in May 2008 that the Council of Ministers recently approved new regulations to lift the ceiling on the amounts ministerial contracting committees can approve. Committees in the ministries of Defense, Interior, Oil, Trade, Health, Electricity, Industry and Minerals, Water Resources, and Municipalities can now approve contracts up to $50 million. This represents a $30 million increase for Defense, Oil, Electricity, and Trade and a $10 million increase for the other ministries. A newly formed Central Contracts Committee will approve contracts exceeding the $50 million limit.

This section discusses the extent to which key U.S. goals for oil, electricity, and water production have been met. Providing essential services to all Iraqi areas and communities and helping Iraq maintain and expand its oil exports are key goals of The New Way Forward. The oil sector is critical to Iraq's economy, accounting for over half of Iraq's gross domestic product and about 90 percent of its revenues. Iraq's crude oil reserves, estimated at a total of 115 billion barrels, are the third largest in the world. After 5 years of effort and $2.7 billion in U.S. reconstruction funds, Iraqi crude oil output has improved for short periods but has consistently fallen below the U.S. goals of reaching an average crude oil production capacity of 3 million barrels per day (mbpd) and export levels of 2.2 mbpd (see figure 12). In May 2008, crude oil production was 2.5 million barrels per day and exports were 1.96 million barrels per day, according to the State Department.
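To put the May 2008 figures in context against the stated goals, the following minimal sketch computes the share of each goal achieved. It is illustrative only and uses the approximate values quoted above; note that the report cites actual production against a production capacity goal, so the comparison is indicative rather than exact.

    # Illustrative only: compares the May 2008 oil figures quoted in this report
    # against the stated U.S. goals (values in million barrels per day, mbpd).
    goals = {"production capacity": 3.0, "exports": 2.2}
    may_2008 = {"production capacity": 2.5, "exports": 1.96}

    for measure, goal in goals.items():
        actual = may_2008[measure]
        print(f"{measure}: {actual} of {goal} mbpd ({actual / goal:.0%} of goal)")
    # production capacity: about 83% of goal; exports: about 89% of goal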
Poor security, corruption, and smuggling continue to impede the reconstruction of Iraq’s oil sector. For example, according to State Department officials and reports, as of 2006, about 10 to 30 percent of refined fuels were being diverted to the black market or smuggled out of Iraq and sold for a profit. According to DOD, investment in Iraq’s oil sector is below the absolute minimum required to sustain current production, and additional foreign and private investment is needed. U.S. officials and industry experts have stated that Iraq would need an estimated $20 billion to $30 billion over the next several years to reach and sustain a crude oil production capacity of 5 mbpd. This production goal is below the level identified in the 2005-2007 National Development Strategy—at least 6 mbpd by 2015.

Since 2003, the United States has provided $4.7 billion for the reconstruction of Iraq’s electricity sector. Despite this substantial investment, electricity generation did not consistently achieve past U.S. goals, and demand continues to outpace supply from Iraq’s national grid (see fig. 13). For example, a recent State Department report shows that for June 3 to 9, 2008, the daily supply of electricity from the grid met only 52 percent of demand. In addition, average hours of electricity were 7.8 hours in Baghdad and 10.2 hours nationwide, compared to the U.S. 2006 goal of 12 hours of daily electricity and the Iraqi Ministry of Electricity goal of 24 hours. The State Department’s technical comments on a draft of this report stated that it is well documented that parts of Iraq, and even parts of Baghdad, have upwards of 16 hours of power on a given day, and that some locations have 24 hours of power. We analyzed data from State’s weekly status reports for the period January 3, 2008, through June 4, 2008, and found that the number of hours of electricity in Baghdad ranged from 6.5 to 12 and averaged about 8 hours per day. For other parts of Iraq, hours of electricity ranged from 8.2 to 14.3, with an average of 10.2 hours per day. According to DOD, the electricity sector suffers from several problems, including fuel shortages, interdictions, damage to power lines, reliance on foreign sources of power, and prior years of neglect. Between 2004 and 2006, the United States reported electricity generation goals that ranged from 110,000 megawatt hours (MWh) to 127,000 MWh. However, since 2007 the United States has stopped setting numerical goals for the electricity sector. According to both the U.S. Embassy’s 2007 Electrical Action Plan and the 2008 Transition Plan, the U.S. goal is to “provide electricity in a reliable and efficient manner to as many Iraqi citizens as possible, and for as many hours as possible.” According to a State Department official, the United States no longer sets numerical goals for the entire electricity sector because U.S. projects constitute only a portion of the electricity sector. Moreover, the senior electricity advisor stated that there are too many variables that may affect any projections. The Ministry of Electricity estimated in its 2006-2015 plan that the government will need $27 billion over 6 to 10 years to reach its goal of providing reliable electricity across Iraq by 2015. The ministry’s goal is to achieve 24 hours of power nationwide and meet demand plus 10 percent.

As we reported in May 2007, a variety of security, corruption, legal, planning, and sustainment challenges have impeded U.S. and Iraqi efforts to restore Iraq’s oil and electricity sectors. 
These challenges have made it difficult to achieve the current crude oil production and export goals that are central to Iraq’s government revenues and economic development. In the electricity sector, these challenges have made it difficult to achieve a reliable Iraqi electrical grid that provides power to all other infrastructure sectors and promotes economic activity. Although the oil and electricity sectors are mutually dependent, the Iraqi government lacks integrated planning for these sectors, leading to inefficiencies that could hinder future rebuilding efforts. Specifically, the Iraqi government lacks an integrated energy plan that clearly identifies future costs and resource needs; rebuilding goals, objectives, and priorities; stakeholder roles and responsibilities, including steps to ensure coordination of ministerial and donor efforts; an assessment of the environmental risks and threats; and performance measures and milestones to monitor and gauge progress. For example, the lack of cooperation and coordination between the Oil and Electricity ministries, particularly in supplying appropriate fuels to the electricity sector, has resulted in inefficiencies such as increased maintenance costs and frequent interruptions in electricity production, according to U.S. officials. We recommended that the Secretary of State, in conjunction with relevant U.S. agencies and in coordination with the donor community, work with the Iraqi government to develop an integrated energy strategy for the oil and electricity sectors that identifies, among other items, key goals and priorities, future funding needs, and steps for enhancing ministerial coordination. In a May 2008 letter, the MNF-I Commanding General asked the Iraqi Prime Minister to establish a ministerial-level oversight committee to develop an Iraqi National Energy Strategy. In commenting on a draft of this report, the State Department indicated that it was encouraging the Iraqi government to develop an integrated energy strategy.

Unsafe drinking water can carry diseases such as cholera, typhoid, and dysentery. Since April 2006, U.S. reconstruction projects have focused on producing enough clean water to reach up to an additional 8.5 million Iraqis. As of March 2008, U.S.-funded projects had the capacity to provide an additional 8 million Iraqis with potable water. The World Bank has estimated that $14.4 billion is needed to rebuild the public works and water system in Iraq; the U.S. government has allocated about $2.4 billion for improvements in the water and sanitation sector. According to the UN Office for the Coordination of Humanitarian Affairs, insecurity, population displacement, and a lack of maintenance are placing pressure on existing water and sanitation facilities, leaving a large number of Iraqis either without water or with access to water that puts them increasingly at risk of waterborne diseases. According to the United Nations Children’s Fund (UNICEF), only one in three Iraqi children under the age of 5 has access to safe drinking water, and only 17 percent of Iraq’s sewage is treated before being discharged into the country’s rivers and waterways. A 2006 UNICEF survey that measured the reliability of water supplies indicated widespread infrastructure problems. For example, although 79 percent of Iraqis reported having access to an improved drinking water source, this figure does not reflect the condition and reliability of services. 
Nearly half of those with access to water sources reported problems with their water service, with 21 percent of this population reporting problems on a daily basis. In addition, only 43 percent of rural residents reported having access to an improved drinking water source. Monitoring progress toward increasing Iraqis’ access to clean water is complicated by several factors. As we reported in 2005 and recently confirmed with the State Department, Iraq has no metering for water usage and no measurement of the quality of the potable water supply. Moreover, State lacks comprehensive and reliable data on the capacity of water treatment and sewage facilities that have not been constructed or rehabilitated by U.S.-funded projects. Finally, as we reported in 2005 and as noted in recent U.S. government and UN reports, not all facilities may be operating as intended due to looting, unreliable electricity, inadequate supplies, or the lack of trained personnel.

According to State and DOD officials, as of late May 2008, the Administration has not revised its prior Iraq strategy document, the National Strategy for Victory in Iraq (NSVI), to include U.S. goals and objectives for The New Way Forward, which ends in July 2008, or the phase that follows. Instead, according to State and DOD officials, future U.S. goals and objectives in Iraq are contained in the following documents: the President’s September 13, 2007, address on “the way forward” in Iraq; the President’s April 10, 2008, address on Iraq; Fact Sheet: The Way Forward in Iraq, April 10, 2008; and the testimony of the Secretary of Defense, April 10, 2008. These documents clearly state the importance the Administration places on continued U.S. involvement in and support for Iraq. They also discuss the ongoing drawdown of U.S. troops in Iraq that will end in July 2008 and generally describe the U.S. military transition that would occur in Iraq over an unspecified period of time in the future. The Secretary of Defense’s testimony defined the desired U.S. end state for Iraq as (1) a unified, democratic, and federal Iraq that can govern, defend, and sustain itself; (2) an Iraq that is an ally against Jihadist terrorism and a net contributor to security in the Gulf; and (3) an Iraq that helps bridge the sectarian divides in the Middle East. The documents, however, do not specify the Administration’s strategic goals and objectives in Iraq for the phase after July 2008 or how it intends to achieve them. Further, while they predict continued progress in the security, political, and economic areas, they do not address the remaining challenges to achieving either unmet U.S. goals and objectives or the desired U.S. end state for Iraq. A clear statement about the U.S. military transition and remaining challenges is important, as the UN mandate for the multinational force in Iraq, under UN Security Council Resolution 1790, expires December 31, 2008. This resolution reaffirmed MNF-I’s authority to take all necessary measures to maintain security and stability in Iraq. The United States and Iraq are negotiating a status of forces agreement to provide the United States and its coalition partners with the authorities necessary to conduct operations to support the Iraqi government after the UN mandate ends. In May 2008, the State Department reported that the MNF-I/U.S. Embassy Joint Campaign Plan provides a road map for the future. This campaign plan is classified. To reflect changing U.S. goals and conditions in Iraq, MNF-I and the U.S. embassy in Baghdad revised their Joint Campaign Plan in July 2007. 
At the President’s direction, they updated it in November 2007 to reflect the decision to withdraw the surge forces by July 2008—the end of The New Way Forward. According to the May 2008 State Department report, the Joint Campaign Plan supports the implementation of U.S. efforts in Iraq along four lines of operation: political, security, economic, and diplomatic. The plan recognizes the importance of enhancing security and protecting the Iraqi population and of advancing the political line of operation to help Iraqis establish legitimate, representative governance in their country at both the national and provincial levels. However, a campaign plan is an operational, not a strategic, plan, according to DOD’s doctrine for joint operation planning. A campaign plan must rely on strategic guidance from national authorities for its development. For example, the April 2006 MNF-I/U.S. embassy Baghdad Joint Campaign Plan relied on the NSC’s prior strategic plan, the National Strategy for Victory in Iraq, as a basis for the plan’s development. Activities at the strategic level include establishing national and multinational military objectives, as well as defining limits and assessing risks for the use of military and other instruments of national power. In contrast, a campaign plan is developed at the operational level. Activities at this level link tactics and strategy by establishing operational objectives needed to achieve strategic objectives, sequencing events to achieve the operational objectives, initiating actions, and applying resources to bring about and sustain these events. The development of a campaign plan, according to doctrine, should be based on suitable and feasible national strategic objectives formulated by the President, the Secretary of Defense, and the Chairman of the Joint Chiefs of Staff—with appropriate consultation with additional NSC members, other U.S. government agencies, and multinational partners. Doctrine states that in developing operational plans, commanders and their staffs must be continuously aware of the higher-level objectives. According to DOD doctrine, if operational objectives are not linked to strategic objectives, tactical considerations can begin to drive the overall strategy at cross-purposes. Joint doctrine also states that effective planning cannot occur without a clear understanding of the end state and the conditions that must exist to end military operations and draw down forces. According to doctrine, a campaign plan should provide an estimate of the time and forces required to reach the conditions for mission success or termination. Our review of the classified Joint Campaign Plan, however, identified limitations in these areas, which are discussed in a classified GAO report accompanying this report.

Weaknesses in “the way forward” and the Joint Campaign Plan are symptomatic of recurring weaknesses in past U.S. strategic planning efforts. Our prior reports assessing (1) the National Strategy for Victory in Iraq, (2) U.S. efforts to develop the capacity of Iraq’s ministries, and (3) U.S. and Iraqi efforts to rebuild Iraq’s energy sector found strategies that lacked clear purpose, scope, roles and responsibilities, and performance measures. For example, we found that the NSVI only partially identified the agencies responsible for implementing the strategy, the current and future costs, and Iraq’s contributions to future needs. Although multiple U.S. agencies have programs to develop the capacity of Iraqi ministries, U.S. 
efforts lack an integrated strategy. Finally, although the United States has spent billions of dollars to rebuild Iraq’s oil and electricity sectors, Iraq lacks an integrated strategic plan for the energy sector. We recommended that the National Security Council, DOD, and State complete a strategic plan for Iraq and that State work with the Iraqi government to develop integrated strategic plans for ministry capacity development and the energy sector. Clear strategies are needed to guide U.S. efforts, manage risk, and identify needed resources.

Since 2003, the United States has developed and revised multiple strategies to address security and reconstruction needs in Iraq. The current strategy—The New Way Forward—responds to failures in prior plans that prematurely transferred security responsibilities to Iraqi forces or belatedly responded to growing sectarian violence. The United States has made some progress in achieving key goals stated in The New Way Forward, but progress is fragile and unmet goals and challenges remain:

Violence has declined from the high levels of 2006 and early 2007, largely the result of an increase in U.S. combat forces, the creation of nongovernmental security forces, and the Mahdi Army’s cease-fire. However, the security environment remains volatile and dangerous.

The number of trained and equipped Iraqi security forces is approaching one-half million. However, the number of Iraqi units capable of performing operations without U.S. assistance has remained at about 10 percent. Efforts to turn security responsibilities over to Iraqi forces remain a continuing challenge.

The Iraqi government has passed key legislation to return some Ba’athists to government, give amnesty to detained Iraqis, and define provincial powers. However, it has not enacted other important legislation for sharing oil resources or holding provincial elections, and its efforts to complete a constitutional review have stalled.

Finally, Iraq has not followed through on commitments to spend more money on its own reconstruction efforts. Low spending rates for the critical oil, electricity, and water sectors are problematic since U.S. investments have ended and increased production goals for these sectors have not been met.

As The New Way Forward and the military surge end in July 2008, and given weaknesses in current DOD and State plans, an updated strategy is needed for how the United States will help Iraq achieve key security, legislative, and economic goals. Accordingly, we recommend that DOD and State, in conjunction with relevant U.S. agencies, develop an updated strategy for Iraq that defines U.S. goals and objectives after July 2008 and addresses the long-term goal of achieving an Iraq that can govern, defend, and sustain itself. This strategy should build on recent security and legislative gains; address the remaining unmet goals and challenges for the near and long term; clearly articulate goals, objectives, roles and responsibilities, and the resources needed; and address prior GAO recommendations.

We provided a draft of this report to the Departments of State, the Treasury, and Defense for their comments. Their comments are provided in appendixes III through V. The agencies also provided technical comments that we have incorporated in the report, where appropriate. The State Department disagreed with our recommendation to develop an updated strategic plan, stating that while the military surge ends, the strategic goals of The New Way Forward remain largely unchanged. 
Similarly, DOD did not concur with our recommendation, stating that The New Way Forward strategy remains valid. However, the departments stated that they would review and refine the strategy as necessary. In addition, DOD stated that the MNF-I/U.S. Embassy Joint Campaign Plan is a comprehensive, government-wide plan that guides the effort to achieve an Iraq that can govern, defend, and sustain itself. We reaffirm the need for an updated strategy for several reasons. First, much has changed in Iraq since January 2007, including some of the assumptions upon which The New Way Forward was based. Specifically:

Violence in Iraq is down, but U.S. surge forces are leaving and over 100,000 armed Sons of Iraq remain.

Late 2007 target dates for the government of Iraq to pass key legislation and assume control over local security have passed.

The United States is currently negotiating a status of forces agreement with Iraq to replace UN Security Council resolutions.

The Secretary of Defense recently articulated a new long-term goal for Iraq—an Iraq that helps bridge sectarian divides in the Middle East.

Second, The New Way Forward is an incomplete strategic plan because it articulates goals and objectives for only the near-term phase that ends in July 2008. Third, the goals and objectives of The New Way Forward and the phase that follows it are contained in disparate documents such as Presidential speeches, White House fact sheets, and an NSC PowerPoint presentation, rather than in a strategic planning document similar to the National Strategy for Victory in Iraq, the prior U.S. strategy for Iraq. Fourth, the limited documents that describe the phase after July 2008 do not specify the Administration’s long-term strategic goals and objectives in Iraq or how to achieve them. Furthermore, the classified Joint Campaign Plan is not a strategic plan; it is an operational plan with significant limitations that we discuss in a separate, classified report that accompanies this report.

The Treasury Department stated that our draft report dismissed the significance of the increase in Iraq’s budgetary “commitments,” stating that GAO’s analyses relied only on the Iraqi Ministry of Finance’s total expenditure reports rather than the Ministry’s special capital reports, which include budgetary “commitments.” Iraq has stated that it has spent and committed about 63 percent of its investment budget. We did not use the special reports in our analyses for two reasons: (1) Treasury Department officials stated that the special reports contained unreliable data, and (2) the reports do not define commitments, measure them, or describe how or when these commitments would result in actual expenditures. In addition, our reviews of these special reports show inconsistent use of poorly defined budgetary terms, as well as columns and rows that did not add up.

We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Joseph A. Christoff, Director, International Affairs and Trade, at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. In this report, we discuss progress in meeting key U.S. 
goals outlined in The New Way Forward, specifically, (1) improving security conditions; (2) developing Iraqi security forces’ capabilities and transferring security responsibilities to the Iraqi government; (3) facilitating Iraqi government efforts to draft, enact, and implement key legislative initiatives; (4) assisting Iraqi government efforts to spend budgets; and (5) helping the Iraqi government provide key essential services to its people. The New Way Forward established goals to achieve over 12 to 18 months, or by July 2008. To complete this work, we reviewed U.S. agency documents and interviewed officials from the Departments of Defense, State, and the Treasury; the Multi-National Force-Iraq (MNF-I) and its subordinate commands; the Defense Intelligence Agency; the National Intelligence Council; and the United Nations. We also reviewed translated copies of Iraqi government documents. In support of this work, we made extensive use of information collected by GAO staff assigned to the U.S. embassy in Baghdad from January through March 2008. We provided drafts of the report to the relevant U.S. agencies for review and comment. We received formal written comments from the Departments of State, the Treasury, and Defense, which are included in appendixes III, IV, and V, respectively. We conducted this performance audit from March through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To provide information on the evolution of the U.S. strategy for Iraq, we relied extensively on prior GAO reports and updated information on the current strategy. To identify the U.S. strategy documents for The New Way Forward and the phase that followed it, we obtained information from State and DOD officials. These officials informed us that the Administration did not revise the National Strategy for Victory in Iraq strategy document when it changed its Iraq strategy in January 2007. A number of documents outline the goals and objectives of The New Way Forward: (1) National Security Council, Highlights of the Iraq Strategy Review, January 2007; (2) the President’s address to the nation, January 10, 2007; (3) Fact Sheet: New Way Forward in Iraq, January 10, 2007; (4) Office of the Press Secretary, White House, Background Briefing by Senior Administration Officials, January 10, 2007; and (5) the July and November 2007 MNF-I/U.S. Embassy Baghdad Joint Campaign Plans. For the goals and objectives of the phase that follows The New Way Forward, State and DOD officials directed us to (1) the President’s speeches on Iraq on September 13, 2007, and April 10, 2008; (2) a White House Fact Sheet on the Way Forward, April 10, 2008; and (3) testimonies of the Secretary of Defense, the Commanding General of MNF-I, and the U.S. Ambassador to Iraq.

To determine the progress made in improving security in Iraq, we relied extensively on a number of prior GAO reports. Where appropriate, we updated data on security trends. To update these data, we obtained and assessed MNF-I data on enemy-initiated attacks against the coalition and its Iraqi partners from the Defense Intelligence Agency (DIA). 
We determined that the data were sufficiently reliable for establishing general trends in the number of enemy-initiated attacks in Iraq. To determine the reliability of the data, we reviewed MNF-I’s attacks reporting guidance, compared the unclassified data to classified sources, and discussed how the data are collected, analyzed, and reported with DIA officials. We also collected data on the three main factors that contributed to the security improvements: (1) U.S. combat forces; (2) nongovernmental Iraqi security forces, such as the Sons of Iraq; and (3) the declared cease-fire by the Mahdi Army. To determine the reliability of the U.S. combat forces data, we compared the unclassified U.S. troop numbers to classified sources and discussed how the data are collected and reported with Department of Defense (DOD) officials. In addition, we reviewed MNF-I, DOD, and United Nations (UN) documents on nongovernmental Iraqi security forces and the declared cease-fire of the Mahdi Army leader. We also interviewed officials from State and DOD, including DIA and the Joint Staff, in Washington, D.C., and Baghdad, Iraq.

To determine if progress has been made in improving the capabilities of Iraq’s security forces and transferring security to the government of Iraq, we relied on a number of prior GAO reports and, where appropriate, we updated data. To update data on the results of U.S. efforts to develop Iraqi security forces, we reviewed DOD and MNF-I documents showing the capabilities and size of the Iraqi army and police units. For example, we analyzed MNF-I’s Operational Readiness Assessments (ORA), formerly known as Transitional Readiness Assessments, for Iraqi army units. To update information on factors affecting the development of Iraqi security forces, we reviewed DOD, State, and UN reports, as well as a report of an independent commission and MNF-I guidance on Iraqi readiness assessments. We relied on DOD and State reports for the number of trained Iraqi security forces. We recognize limitations to these reported data but determined that they are sufficiently reliable to show a general trend in the growth of Iraqi security forces. We reviewed DOD and State documents showing planned and actual transfer of provinces to provincial Iraqi control. We interviewed officials from DOD, DIA, State, and the National Intelligence Council.

To determine progress made on actions related to Iraq’s constitutional review and enacting and implementing key legislation, we used prior GAO reporting and updated information where appropriate. In updating the information, we reviewed reports and documentation from the UN, the U.S. Institute of Peace, nongovernmental organizations, the United States Agency for International Development (USAID), and the Departments of Defense and State in Washington, D.C., and Baghdad, Iraq. We reviewed draft laws and enacted legislation, as well as analyses of the laws. We spoke with officials from the UN, State, Defense, USAID, and the U.S. Institute of Peace, as well as with Iraqi officials.

To assess the extent to which the United States is assisting Iraqi government efforts to execute budgets, we relied extensively on a prior GAO report and updated the information where necessary. We interviewed officials from the U.S. Department of the Treasury, DOD, and State in Washington, D.C., as well as consultants under contract with the United Kingdom’s Department for International Development. 
To assess progress in allocating and spending Iraqi revenues, we reviewed Iraqi Ministry of Finance capital budget and expenditure data for fiscal years 2006 and 2007 provided by the Treasury, and unofficial Ministry of Planning and Development Cooperation data on capital expenditures reported by MNF-I. To examine the data the U.S. Embassy in Baghdad uses to measure Iraqi government spending, we obtained expenditure data from Treasury and the U.S. embassy in Baghdad and interviewed knowledgeable U.S. agency officials. We did not independently verify the precision of the data on Iraq’s budget execution. However, the disparity among the different sets of data calls into question their reliability and whether they can be used to draw firm conclusions about the extent to which the Iraqi government has increased its spending on capital projects in 2007, compared with 2006. We also reviewed U.S. embassy reports on Iraqi budget execution, Iraqi government instructions for executing the budget, Iraq’s Financial Management Law, the Special Inspector General for Iraq Reconstruction’s (SIGIR) Quarterly and Semiannual Reports to the Congress, and the Administration’s July and September 2007 Benchmark Assessment Reports.

To assess the extent to which the Iraqi government is providing key essential services to the Iraqi people, we relied extensively on prior GAO reports and updated the information where necessary. To do so, we interviewed officials and reviewed documents from DOD and State. We also reviewed prior GAO, U.S. agency inspector general, SIGIR, and other audit agency reports. On the basis of this analysis, we found the data sufficiently reliable for identifying production goals in both sectors and whether actual production is meeting these goals.

In September 2007, as required by the U.S. Troop Readiness, Veterans’ Care, Katrina Recovery, and Iraq Accountability Appropriations Act of 2007, GAO provided Congress with an independent assessment of whether the government of Iraq had met 18 benchmarks contained in the act, and the status of the achievement of the benchmarks. While our current report covers almost all of the issues included in our September 2007 report, our reporting objectives are derived from the key goals outlined in The New Way Forward in Iraq. In many of the areas, our current reporting objectives enabled us to provide a broader context and updated analysis that expand on information included in the benchmarks report. This report discusses progress in meeting key U.S. goals outlined in The New Way Forward, specifically, (1) improving security conditions; (2) developing Iraqi security forces’ capabilities and transferring security responsibilities to the Iraqi government; (3) facilitating Iraqi government efforts to draft, enact, and implement key legislative initiatives; (4) assisting Iraqi government efforts to spend budgets; and (5) helping the Iraqi government provide key essential services to its people. We did not assess issues described in benchmarks (viii) and (xvi) because we previously assessed those benchmarks to have been met. We did not assess benchmark (iv) because, while the semi-autonomous regions law has been enacted, implementation does not occur until one or more provinces attempt to form a region. Table 4 provides a crosswalk between our current reporting objectives and the 18 benchmarks.

The following are GAO’s comments on the Department of State letter dated June 16, 2008. 1. 
State disagreed with our recommendation to develop an updated strategic plan, stating that while the military surge ends, the strategic goals of The New Way Forward remain largely unchanged. State noted that Iraq continues to face many challenges in the near term and there are still unmet goals. While State said it would review and refine the strategy as needed, it commented that “we do not require a new strategic document.” We disagree. Much has changed in Iraq since January 2007, including some of the assumptions upon which The New Way Forward was based.

Violence in Iraq is down, but U.S. surge forces are leaving and over 100,000 armed Sons of Iraq remain.

Late 2007 target dates for the government of Iraq to pass key legislation and assume control over local security have passed.

The United States is currently negotiating a status of forces agreement with Iraq to replace UN Security Council resolutions.

The Secretary of Defense recently articulated a new long-term goal for Iraq—an Iraq that helps bridge sectarian divides in the Middle East.

An updated U.S. strategy must reflect these changes by assessing the progress made over the past 18 months, targeting the unmet goals of The New Way Forward, and articulating our long-term strategic objectives for Iraq.

2. It is unclear whether State is implementing GAO’s prior recommendations on building capacity in Iraq’s ministries. In our October 2007 report, we recommended that the State Department develop an integrated plan for U.S. capacity development programs in Iraq. The Embassy stated that it is in the process of implementing a previous GAO recommendation that will enhance U.S. capacity development in Iraq. In contrast, the State Department contends that our recommendation is not needed because such a plan already exists. An integrated plan is still needed and becomes even more important as State and Treasury announce another new capacity development program, the Public Financial Management Action Group, to help Iraq with budget execution issues.

3. We are encouraged that State is working with the Iraqi government to develop the integrated national energy strategy we called for in our May 2007 report, Rebuilding Iraq: Integrated Strategic Plan Needed to Help Restore Iraq’s Oil and Electricity Sectors, GAO-07-677.

The following are GAO’s comments on the Department of the Treasury letter dated June 12, 2008.

1. The government of Iraq allocated $10 billion of its revenues for capital projects and reconstruction when it passed its 2007 budget in February 2007. We focused on Iraq’s efforts to spend its capital budget because it is a key benchmark that the government committed to achieve by the end of 2007. The New Way Forward identified Iraq’s inability to fully spend its own resources to rebuild its infrastructure and deliver essential services as a critical economic challenge to Iraq’s self-reliance.

2. Treasury states that Iraq has improved its overall budget execution in 2007, citing as an example an overall increase in Iraq’s budget from $23 billion in 2006 to $26.6 billion in 2007, an increase of 16 percent. However, the Ministry of Finance reports expenditures in Iraqi dinars, not U.S. dollars. When analyzed in dinars, Iraq’s budget decreased 3 percent, from 34.5 trillion dinars in 2006 to 33.5 trillion dinars in 2007. The 16 percent increase that Treasury reported is due to the 19 percent appreciation of the Iraqi dinar in 2007, as the illustrative calculation below shows.
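The dollar-versus-dinar point in comment 2 can be made concrete with a minimal Python calculation. It uses only the budget figures cited above; the implied exchange rates are back-calculated from those figures rather than taken from an independent source, and the script is ours, added for illustration only.

```python
# Illustrative only: budget figures cited in comment 2.
budget_usd_billions = {2006: 23.0, 2007: 26.6}   # Iraq's budget in billions of U.S. dollars
budget_iqd_trillions = {2006: 34.5, 2007: 33.5}  # the same budget in trillions of dinars

dollar_change = (budget_usd_billions[2007] / budget_usd_billions[2006] - 1) * 100  # about +16%
dinar_change = (budget_iqd_trillions[2007] / budget_iqd_trillions[2006] - 1) * 100  # about -3%

# Implied exchange rate (dinars per dollar) for each year and the appreciation it implies.
rate = {year: budget_iqd_trillions[year] * 1e12 / (budget_usd_billions[year] * 1e9)
        for year in (2006, 2007)}
appreciation = (rate[2006] / rate[2007] - 1) * 100  # about +19%

print(f"Change in dollars: {dollar_change:+.0f} percent")
print(f"Change in dinars:  {dinar_change:+.0f} percent")
print(f"Implied dinar appreciation against the dollar: {appreciation:.0f} percent")
```

In other words, the dollar-denominated increase Treasury cited reflects the dinar’s appreciation against the dollar rather than a larger budget in dinar terms.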
3. We agree that Iraq’s budget doubled in size between 2005 and 2008 in dollar terms. However, much of the increase was due to a 25 percent appreciation of the Iraqi dinar and a fourfold increase in the budgets of Iraq’s security ministries.

4. Treasury states that our draft report dismisses the significance of the increase in Iraq’s budgetary “commitments,” stating that GAO’s analyses rely only on the Iraqi Ministry of Finance’s total expenditure reports rather than the Ministry’s special capital reports, which include budgetary “commitments.” We did not use the special reports in our analyses for two reasons: (1) Treasury Department officials stated in our meetings with them that the special reports contain unreliable and unverifiable data, and (2) the special reports do not define commitments, measure them, or describe how or when these commitments would result in actual expenditures. In addition, our reviews of these special reports show inconsistent use of poorly defined budgetary terms, as well as columns and rows that did not add up.

5. Treasury stated that Iraq counts capital expenditures in the grants section of its expenditure reports, as well as the non-financial assets section. After reviewing the grants section, we have updated the data presented in table 3 to include an additional $1.1 billion in budget and expenditures for 2007. Accordingly, the percentage of the budget spent in 2007 was 28 percent.

6. We added information on the Iraqi government’s report that it spent and committed about 63 percent of its investment budget.

7. We have added additional information on the Public Financial Management Action Group that Treasury is forming to improve Iraqi budget execution across Iraqi ministries and provinces.

The following are GAO’s comments on the Department of Defense letter dated June 17, 2008.

1. DOD recognized that, as with all strategies, updates and refinements occur at varying intervals to take into account changes in the strategic environment. However, DOD did not concur with our recommendation, stating that The New Way Forward strategy remains valid. We disagree for several reasons. First, much has changed in Iraq since January 2007, including some of the assumptions upon which The New Way Forward was based. Specifically:

Violence in Iraq is down, but U.S. surge forces are leaving and over 100,000 armed Sons of Iraq remain.

Late 2007 target dates for the government of Iraq to pass key legislation and assume control over local security have passed.

The United States is currently negotiating a status of forces agreement with Iraq to replace UN Security Council resolutions.

The Secretary of Defense recently articulated a new long-term goal for Iraq—an Iraq that helps bridge sectarian divides in the Middle East.

Second, The New Way Forward is not a complete strategic plan because it lays out goals and objectives for only the near-term phase that ends in July 2008. Third, the goals and objectives of The New Way Forward and the phase that follows it are contained in disparate documents such as Presidential speeches, White House fact sheets, and an NSC PowerPoint presentation, rather than in a strategic planning document similar to the National Strategy for Victory in Iraq (NSVI), the prior U.S. strategy for Iraq. Fourth, the documents that describe the phase after July 2008 do not specify the Administration’s long-term strategic goals and objectives in Iraq or how it intends to achieve them. In contrast, while the NSVI was also an incomplete strategy, it contained a comprehensive description of U.S. 
political, security, and economic goals and objectives in Iraq over the short term, medium term, and long term. We continue to believe that the Administration should update its strategy for Iraq, given the importance of the war effort to U.S. national security interests, the expenditure of billions of dollars for U.S. military and civilian efforts in Iraq, and the continued deployment of at least 140,000 troops in Iraq. An updated U.S. strategy must reflect changes in conditions in Iraq by assessing the progress made over the past 18 months, targeting the unmet goals of The New Way Forward, and articulating our long-term strategic objectives for Iraq.

2. DOD cited the MNF-I/U.S. embassy-Iraq Joint Campaign Plan as a comprehensive, government-wide plan that guides the effort to achieve an Iraq that can govern, defend, and sustain itself. In our review of the classified Joint Campaign Plan, however, we identified limitations to the plan, which are discussed in a separate, classified GAO report—Stabilizing Iraq: DOD Should Identify and Prioritize the Conditions Necessary for the Continued Drawdown of U.S. Forces. Further, we believe that the Joint Campaign Plan is not a substitute for an updated strategic plan for Iraq. As we stated in our report, a campaign plan is an operational, not a strategic, plan, according to DOD’s doctrine for joint operation planning. A campaign plan must rely on strategic guidance from national authorities for its development. For example, the April 2006 MNF-I/U.S. embassy Baghdad Joint Campaign Plan relied on the NSC’s prior strategic plan, the National Strategy for Victory in Iraq, as a basis for the plan’s development. The classified campaign plan does not provide Congress or the American people with the Administration’s road map for achieving victory in Iraq.

3. According to DOD, MNF-I and the U.S. embassy recently assessed the security line of operation and determined that the goals for the phase ending in summer 2008 have been met. We disagree with DOD’s statement that the security goals for this phase have been met. For example, The New Way Forward stated that the Iraqi government would take responsibility for security in all 18 provinces by November 2007, but only 8 of 18 provinces had transitioned to Iraqi control at that time. As of June 18, 2008, only 9 of 18 provinces had transitioned. Our classified report on the Joint Campaign Plan provides more information on the goals of the security line of operation, the various phases of the campaign plan, and a recent assessment of the security line of operation.

4. DOD stated that it is misleading for our report to characterize the Iraqi security forces’ capabilities by giving the percentage of units at Operational Readiness Assessment (ORA) level 1, noting that as of late May 2008, 70 percent of Iraqi units were in the lead in counterinsurgency operations. We added information on Iraqi units in the lead to our report. However, we believe that the report is not misleading in providing information on ORA level 1 units because this was a benchmark established by Congress and derived from benchmarks and commitments articulated by the Iraqi government beginning in June 2006. Thus, the numbers of independent Iraqi security forces as measured by ORA level 1 continue to be an important measure of the capabilities of the Iraqi security forces. Further, as we discuss in the report, the term “in the lead” has evolved to include less capable Iraqi security forces. 
Specifically, according to the testimony of the MNF-I Commanding General, MNF-I counted only ORA level 1 and ORA level 2 units as “in the lead” in January 2007. However, as of March 2008, MNF-I was also counting some ORA level 3 units—that is, units only “partially capable of conducting counterinsurgency operations”—as in the lead in counterinsurgency operations.

5. DOD disagreed with our measuring progress in power generation against an ever-rising demand for electricity and noted that energy production has increased over the past year. We present data on the gap between supply and demand for electricity in Iraq because the Departments of State and Defense use this statistic to measure progress. We have updated our report to reflect data through May 2008 and DOD’s statement regarding the slight increase in electricity generation over the past year.

6. DOD stated that the goal against which we measure oil production progress was an arbitrary goal set by the Coalition Provisional Authority (CPA). The State Department had similar technical comments. We used the goal of 3.0 mbpd production capacity because the DOD command responsible for funding and managing oil reconstruction projects in Iraq—the U.S. Army Corps of Engineers—has consistently used this goal to measure progress in Iraq. As recently as April 2008, the U.S. Army Corps of Engineers included this goal in its weekly update to the Secretary of the Army. We have updated our report to include oil production statistics through May 2008.

7. DOD stated that although the hydrocarbon legislation is important to the economic development of Iraq, Iraq’s oil wealth is being distributed to provinces on a reasonably equitable basis. Providing Iraq’s oil wealth through the budget process is not a sustainable solution for equitably distributing resources, since allocations must be negotiated annually. The hydrocarbon legislation is intended to provide an enduring resolution for the management and control of Iraq’s current and future hydrocarbon resources and the distribution of revenues from them. Furthermore, this legislation is to provide a transparent legal framework that defines the rights of foreign investors and encourages the foreign investment needed to modernize Iraq’s oil sector.

8. We updated our report to include enemy-initiated attacks data for May 2008. Unclassified attacks data for May were not available at the time we sent our draft report to the agencies for comment.

In addition, the following staff contributed to the report: Judith McCloskey, Assistant Director; Tetsuo Miyabara, Assistant Director; Minty Abraham; Ashley Alley; David Bruno; Monica Brym; Daniel Chen; Lynn Cothern; Martin De Alteriis; Leah DeWolf; Timothy Fairbanks; Walker Fullerton; Matthew Helm; Dorian Herring; Patrick Hickey; Rhonda Horried; Bruce Kutnick; Jeremy Latimer; Stephen Lord; Kathleen Monahan; Mary Moutsos; Elizabeth Repko; Jena Sinkfield; and Audrey Solis.
Since 2001, Congress has appropriated about $640 billion for the global war on terrorism, the majority of it for operations in Iraq. In January 2007, the President announced The New Way Forward to stem violence in Iraq and enable the Iraqi government to foster national reconciliation. This new strategy established goals and objectives to achieve over 12 to 18 months, or by July 2008. GAO discusses progress in meeting key goals in The New Way Forward: (1) improve security conditions; (2) develop capable Iraqi security forces; and help the Iraqi government (3) enact key legislation, (4) spend capital budgets, and (5) provide essential services. GAO also discusses U.S. strategies for Iraq. GAO reviewed documents and interviewed officials from U.S. agencies, the United Nations, and the Iraqi government. GAO also had staff stationed in Baghdad. Since May 2003, GAO has issued over 130 Iraq-related audits, which provided baseline information for this assessment. GAO prepared this report under the Comptroller General's authority.

The New Way Forward responded to failures in prior strategies that prematurely transferred security responsibilities to Iraqi forces or belatedly responded to growing sectarian violence. Overall violence, as measured by enemy-initiated attacks, fell about 70 percent in Iraq, from about 180 attacks per day in June 2007 to about 50 attacks per day in February 2008. Security gains have largely resulted from (1) the increase in U.S. combat forces, (2) the creation of nongovernmental security forces such as the Sons of Iraq, and (3) the Mahdi Army's declaration of a cease-fire. Average daily attacks were at higher levels in March and April before declining in May 2008. The security environment remains volatile and dangerous. The number of trained Iraqi forces has increased from 323,000 in January 2007 to 478,000 in May 2008; many units are leading counterinsurgency operations. However, the Department of Defense reported in March 2008 that the number of Iraqi units capable of performing operations without U.S. assistance has remained at about 10 percent. Several factors have complicated the development of capable security forces, including the lack of a single unified force, sectarian and militia influences, and continued dependence on U.S. and coalition forces. The Iraqi government has enacted key legislation to return some Ba'athists to government, give amnesty to detained Iraqis, and define provincial powers. However, it has not yet enacted other important legislation for sharing oil resources or holding provincial elections. Efforts to complete the constitutional review have also stalled. A goal of The New Way Forward was to facilitate the Iraqis' efforts to enact all key legislation by the end of 2007. Between 2005 and 2007, Iraq spent only 24 percent of the $27 billion it budgeted for its own reconstruction efforts. More specifically, Iraq's central ministries, responsible for security and essential services, spent only 11 percent of their capital investment budgets in 2007—down from similarly low rates of 14 and 13 percent in the 2 prior years. Violence and sectarian strife, a shortage of skilled labor, and weak procurement and budgeting systems have hampered Iraq's efforts to spend its capital budgets. Although oil production has improved for short periods, the May 2008 production level of about 2.5 million barrels per day (mbpd) was below the U.S. goal of 3 mbpd. The daily supply of electricity met only about half of demand in early May 2008. 
Conversely, State reports that U.S. goals for Iraq's water sector are close to being reached. The unstable security environment, corruption, and lack of technical capacity have contributed to the shortfalls. GAO recommends that DOD and State, in conjunction with relevant U.S. agencies, develop an updated strategy for Iraq that defines U.S. goals and objectives after July 2008. The departments disagreed with our recommendation, stating that The New Way Forward strategy remains valid but that they would review and refine the strategy as necessary. We reaffirm the need for an updated strategy given the important changes that have occurred in Iraq since January 2007. An updated strategy should build on recent gains, address unmet goals and objectives, and articulate the U.S. strategy beyond July 2008.
DOD collects information on the extent of foreign participation in its contracts to assess matters related to defense trade balances and domestic industrial base capabilities. Toward this end, DOD uses different sources of information. For defense trade information, DOD has one database for prime contract awards (DD 350 Individual Contracting Action Report) and a second database for foreign subcontract awards (DD 2139 Report of Contract Performance Outside the United States). For industrial base information, DOD periodically conducts studies of specific industry sectors using industrial base questionnaires. These studies sometimes address the level of foreign participation in a particular industry sector.

The United States currently conducts defense trade with 21 countries under the terms of reciprocal defense procurement memoranda of understanding (MOU). These agreements were designed in the late 1970s to promote rationalization, standardization, and interoperability of defense equipment within the North Atlantic Treaty Organization (NATO). Consistent with relevant laws and regulations, these MOUs seek to eliminate the application of nations’ buy-national laws and tariffs relating to defense procurements. DOD’s Office of Defense Procurement (Foreign Contracting) monitors the level of two-way defense procurement activity under MOUs by preparing summaries on the annual defense trade procurement balances between the United States and the 21 countries. The Office uses these summaries internally and exchanges the data with MOU countries that give the United States their defense procurement statistics. DOD has exchanged data with six MOU countries: Finland, Germany, Israel, Norway, Spain, and the United Kingdom. DOD does not compare the other countries’ defense trade information with its own because it does not know how the other countries define and collect their defense trade information.

As part of its efforts to monitor foreign procurements, DOD established in 1982 a reporting requirement to identify certain subcontracts performed outside the United States. In the fiscal year 1993 defense authorization legislation, Congress required any firm performing a DOD contract exceeding $10 million, or submitting a bid or proposal for such a contract, to notify DOD in advance if (1) the firm or any of its first-tier subcontractors intends to perform work exceeding $500,000 on that contract outside the United States and Canada and (2) such work could be performed inside the United States or Canada. This information must be made available for preparing required national defense technology and industrial base assessments. DOD regulations also require prime contractors to submit notification of contracts exceeding $500,000 when any part of the contract that exceeds $25,000 will be performed outside the United States, unless a foreign place of performance (1) is the principal place of performance and (2) is in the firm’s offer. Contracts for commercial items or identified exceptions need not be reported. First-tier subcontractors awarded subcontracts in excess of $100,000 are also subject to the reporting requirement. Prime contractors and first-tier subcontractors are required on a quarterly basis to submit information such as the type of supply or service provided, the principal place of subcontract performance, and the dollar value of the transaction. 
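Taken together, the notification rules just described reduce to a handful of dollar-threshold tests. The following Python sketch is our own simplified paraphrase of those tests, not the text of the statute or the regulation, and it omits the commercial-item and other identified exceptions noted above.

```python
# Simplified paraphrase of the notification tests described above; illustrative only.

def advance_notice_required(contract_value, foreign_work_value,
                            could_be_done_in_us_or_canada):
    """Statutory test (fiscal year 1993 authorization act): a DOD contract over
    $10 million on which the firm or a first-tier subcontractor intends to perform
    more than $500,000 of work outside the United States and Canada that could
    have been performed inside the United States or Canada."""
    return (contract_value > 10_000_000
            and foreign_work_value > 500_000
            and could_be_done_in_us_or_canada)

def dd2139_report_required(contract_value, foreign_portion_value,
                           foreign_is_principal_place, foreign_place_in_offer):
    """Regulatory test for a prime contractor: a contract over $500,000 with any
    part over $25,000 performed outside the United States, unless the foreign
    place of performance is the principal place of performance and was in the
    firm's offer. Commercial items, other identified exceptions, and the
    separate $100,000 test for first-tier subcontractors are omitted here."""
    if foreign_is_principal_place and foreign_place_in_offer:
        return False
    return contract_value > 500_000 and foreign_portion_value > 25_000

# The misreading discussed later in this report: a prime contract above $500,000
# with a foreign subcontract below $500,000 but above $25,000 still triggers the
# regulatory reporting test.
print(dd2139_report_required(600_000, 100_000, False, False))  # True
```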
The regulation states that reports should be submitted to the Office of Foreign Contracting on the standard form DD 2139 (Report of Contract Performance Outside the United States) or in computer-generated reports. The Office enters the information into its DD 2139 database on foreign subcontracting.

Although DOD purchases the majority of its defense equipment and services from contractors performing in the United States, it does purchase some from firms performing outside the United States. While subject to annual fluctuations, the value of DOD’s prime contract awards performed outside the United States remained about 5.5 percent of total DOD procurement awards from fiscal year 1987 to 1997 (see fig. 1). These awards, as a percentage of total DOD prime contract awards, ranged from a high of approximately 6.8 percent in 1991 to a low of 4.6 percent in 1995. Though the value of awards outside the United States increased during the last 2 fiscal years, it represented only 5.8 percent of total DOD prime contract award values by the end of 1997. From fiscal year 1987 through 1997, the value of DOD prime contracts performed outside the United States declined, which was consistent with the overall decline in the value of total DOD prime contract awards. As shown in figure 2, the value of DOD prime contracts performed outside the United States declined from about $12.5 billion to about $6.9 billion, while the total value of DOD prime contract awards also declined, from about $197 billion to $119 billion. Data were adjusted for inflation and are shown in constant fiscal year 1998 dollars.

Prime contracts performed outside the United States tended to be concentrated in certain countries and products. Although DOD’s prime contracts were performed in more than 100 different countries between fiscal year 1987 and 1997, 5 countries—Germany, Italy, Japan, South Korea, and the United Kingdom—accounted for about 61 percent of total prime contract values performed outside the United States in cases where the country of performance was identified. While DOD awarded prime contracts outside the United States for a wide variety of items, many of the awards were concentrated in three sectors: services, fuel, and construction. Services accounted for about 41 percent of all prime contracts performed outside the United States in fiscal year 1997, while petroleum and other fuel-related products accounted for about 19 percent and construction accounted for another 17 percent.

DOD also tracks the award of subcontracts performed outside the United States, but the subcontract data are limited. According to DOD’s DD 2139 data, the value of annual foreign subcontract awards ranged from a high of almost $2 billion in fiscal year 1990 to a low of almost $1.1 billion in fiscal year 1997, averaging about $1.4 billion over this period. As with prime contracts, DOD’s foreign subcontracts tended to be concentrated in only a few countries. From 1990 to 1997, Canada, Israel, and the United Kingdom accounted for about 65 percent of the subcontracts that appeared in DOD’s foreign subcontract database. The foreign subcontracts that appear in DOD’s database cover a variety of equipment, such as computers, circuitry, and components for engines, aircraft, lenses, and optics, as well as services such as assembly, maintenance, and testing. DOD’s Office of Foreign Contracting and DOD industrial base offices both collect and use foreign subcontract data, but they do not exchange their data with one another. 
In addition, the Office of Foreign Contracting has no safeguards for ensuring the accuracy and completeness of its foreign subcontract award (DD 2139) database. In our review of selected subcontracts, we found instances in which foreign subcontracts were not reported to DOD in accordance with the reporting requirement, resulting in the underreporting of foreign subcontract values. Also, the Office lacks standards and procedures for managing its database, which compromises the database’s usefulness. An Office of Foreign Contracting official said the Office does not have sufficient resources to validate the collection and management of data but reviews the reported data for inconsistencies.

DOD’s Office of Foreign Contracting collects foreign subcontract information from prime contractors and first-tier subcontractors as required by law and regulation. The Office uses the data to prepare defense procurement trade balance reports on offshore activity with the 21 countries with which the United States has reciprocal procurement MOUs. While the Office’s foreign subcontract data are used for a single, narrow purpose, similar data are sometimes collected by other DOD offices and are used to prepare industrial base assessments. DOD’s periodic industrial base assessments sometimes entail evaluating reliance on foreign suppliers for specific products. DOD and military industrial base specialists rely on their own industrial base questionnaires to obtain relevant information to respond to specific requests from the military services. We spoke with numerous specialists who were not aware that the data collected by the Office of Foreign Contracting existed. In addition, officials from the Office of Foreign Contracting said they have not been requested to furnish the foreign subcontract data for industrial base assessments.

DOD has no process or procedures to systematically ensure that contractors are complying with the foreign subcontract reporting requirement. Furthermore, neither the law nor the regulation provides penalties for noncompliance. DOD officials said they performed limited follow-up with contractors and are certain that contractors are reporting as required. However, responsibility for determining whether a foreign subcontract is to be reported lies with the contractor. We found that in several instances contractors had not reported their foreign subcontracts. Among the 42 foreign subcontracts we examined, 11 subcontracts totaling about $124 million did not have DD 2139 forms filed with the Office of Foreign Contracting. Contractors gave various reasons for not filing the DD 2139 forms. Some said they were unaware of the requirement to report foreign subcontract awards; others had apparently misinterpreted the law and regulation. A few of them said that the regulation was not clear and that a better understanding of the intent of the law and regulation would help them determine if they needed to report. Examples of their rationale for not filing included the following:

Two contractors stated that Defense Acquisition Circular 91-5 rescinded the DD 2139 form. However, the circular deleted only the form and not the reporting requirement. Also, a subsequent circular reinstated the DD 2139 form.

One contractor interpreted the dollar thresholds in the reporting requirement as applying only to the foreign subcontracts, not to the value of the prime contract. 
This contractor did not file a DD 2139 form, even though the value of the prime contract was above the $500,000 threshold, because the foreign subcontract was below this amount. The regulation required a contractor to report foreign subcontracts greater than $25,000 for prime contracts exceeding $500,000. One contractor awarded a foreign subcontract as part of a co-production program with Germany. The contractor cited the existence of an MOU between the United States and Germany for a specific program as the justification for not filing a DD 2139 form. We found no support for the contractor’s position in the MOU, which aims “to use industrial capabilities in both countries by providing both industries a fair chance to compete on a dual-source basis and by initiating co-production of components.” The MOU is also subject to the respective countries’ national laws, regulations, and policies. One contractor said its foreign subcontract was for a component that had to be produced outside the United States because its design was solely owned by a foreign firm. According to the contractor, no U.S. or Canadian firm was licensed to produce it, although the U.S. company had the manufacturing capability to produce this item. Given this circumstance, a company official said that he believed that the company did not have to report this subcontract. The official, however, expressed uncertainty about the reporting requirement and later indicated that the company would report this subcontract to the Office of Foreign Contracting. We also identified 12 subcontracts, which were valued at almost $67 million, that were not reported to the Office because of possible weaknesses in the procedures used to collect foreign subcontract data. First, contracts should include the foreign subcontract reporting requirement to ensure that contractors report their foreign subcontracts to the Office. We found one contractor that did not file the DD 2139 forms for four subcontracts because the reporting requirement was erroneously omitted from the prime contract. Second, we found three companies that did not file DD 2139 forms for eight subcontracts because, consistent with the reporting requirement, this information was reported in their initial offers and was submitted either to the contracting officer or to the prime contractor. However, the information was not forwarded to the Office by the contracting officers as stipulated by the regulation. The contracting officers we spoke with were not aware they were required to send this information to the Office of Foreign Contracting for inclusion in the DD 2139 database. Although the law requires advance notification of contract performance outside the United States, it does not spell out what constitutes a foreign subcontractor. We spoke with several contractors that regularly submitted DD 2139 information but found that they used different criteria for identifying a foreign subcontractor. The various criteria included (1) no U.S. taxpayer identification number, (2) incorporation outside the United States, (3) foreign ownership, (4) place of contract performance, and (5) requirement of an export license. These differences sometimes caused contractors to report similar transactions differently, creating inconsistent data. For example, one contractor said it would report subcontracts awarded to a foreign subsidiary of a U.S. company because the subsidiary would be manufacturing overseas. However, another contractor said it would not report a subcontract awarded to a foreign subsidiary of a U.S. company because the subsidiary is domestically owned.
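The threshold misreading described above turns on which contract the $500,000 test applies to. The sketch below restates the rule as characterized in this report; the function, parameter names, and dollar examples are illustrative and are not part of DOD’s regulation or systems.

```python
# Illustrative check of the DD 2139 reporting thresholds described above.
# The dollar thresholds are those cited in this report; the function and
# example values are hypothetical, not DOD's.

PRIME_THRESHOLD = 500_000  # prime contract must exceed $500,000
SUB_THRESHOLD = 25_000     # foreign subcontract must exceed $25,000

def dd2139_report_required(prime_value: float, foreign_sub_value: float) -> bool:
    """The $500,000 test applies to the prime contract, not the subcontract."""
    return prime_value > PRIME_THRESHOLD and foreign_sub_value > SUB_THRESHOLD

# The misreading above: a $30,000 foreign subcontract under a $600,000 prime
# contract is reportable even though the subcontract is well below $500,000.
print(dd2139_report_required(600_000, 30_000))   # True  -> report required
print(dd2139_report_required(600_000, 20_000))   # False -> subcontract too small
print(dd2139_report_required(400_000, 30_000))   # False -> prime contract too small
```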
Contractors also lack clear guidance about whether deobligations of foreign subcontracts should be reported. Currently, the Office of Foreign Contracting enters any subcontract deobligations voluntarily reported by contractors into its database, but there is no requirement that contractors report these deobligations. As a result, deobligations are being reported inconsistently. The DD 2139 database lacks documentation defining the database’s structure, critical data fields, and procedures for data entry, all of which makes the data highly questionable. For example, no written procedures exist for querying the database for the total dollar value of foreign subcontracts awarded. Consequently, queries for the total dollar value of these subcontracts can yield varying figures, depending on the method used to query the database. We queried the database using two different methods and obtained a difference of $15.3 million in the total dollar value of foreign subcontracts in fiscal year 1997 and a difference of $2.8 billion for fiscal years 1990 through 1997. The current DD 2139 database structure of 30 data fields is based on a November 1985 version of the DD 2139 form (Subcontract Report of Foreign Purchases). However, some of the data no longer need to be reported. For example, the database contains six data fields of dollar values, but only two of the six fields are needed to calculate the value of foreign subcontracts awarded. According to an agency official, the remaining four data fields are irrelevant. The DD 2139 database also contains two fields related to offsets, but contractors are no longer required to submit this information. The Office of Foreign Contracting, however, continues to enter into its database offset information when it is voluntarily provided by contractors. The lack of standards and procedures for data entry has caused numerous data entry errors that compromise the database’s usefulness. Data entry errors included blank critical fields; keypunch errors; duplicate entries; entries of contract values for U.S. subcontractors; and inconsistent entries of prime contract numbers, prime contractor names, and weapon systems names. In fiscal year 1997, we found that 2 prime contractors’ names had been entered with 10 or more different variations. Inconsistent data entry makes it difficult to query the DD 2139 database or use another database to validate its completeness. Programming errors in the DD 2139 database resulted in some underreporting of foreign source procurements. We examined the database structure for fiscal year 1997 and found some incorrectly coded database records. The miscoding of data for 1 year caused 13 out of 1,412 data records to be omitted from the total value of foreign subcontracts. As a result, the total value of foreign subcontracts for fiscal year 1997 was understated by $1.15 million, of which $802,249 was related to MOU countries. No error detection and correction procedures have been established to ensure the integrity of the DD 2139 database. As a result, the database contained information that was inconsistent with the reporting criteria specified in the statutory requirement. For example, the database should contain subcontracts awarded to foreign sources only for DOD prime contracts. For fiscal year 1997, the database included 25 out of 1,412 subcontracts totaling $2.8 million for the National Aeronautics and Space Administration, an independent civilian agency. We also found that one U.S.
defense contractor reported its foreign subcontracts for sales to both DOD and foreign governments (the latter sales are known as direct commercial sales). Although the contractor’s submittal clearly distinguished between DOD and direct commercial sale subcontracts, the Office included the data on subcontract awards for direct commercial sales in the database. Data on DOD subcontracts performed outside the United States could provide important information for making decisions on foreign sourcing and industrial base issues. The Office of Foreign Contracting collects information on contracts performed outside the United States to prepare defense trade reports. DOD industrial base specialists collect similar information for periodic industrial base assessments but are unaware of the data the Office has collected. In addition, weaknesses in the Office’s data collection process significantly limit DOD’s ability to use consistent data on foreign subcontract-level procurements. The Office has made no effort to improve contractor compliance with the foreign subcontract reporting requirement, resulting in underrepresentation of the level of foreign subcontracting activity. Poor database management also undermines the credibility and usefulness of the Office’s foreign subcontract data. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition and Technology to review the existing subcontract reporting requirement and amend it, as needed, to ensure that data collected satisfy the common information needs of the offices working on defense trade and industrial base issues, thus also avoiding duplicative data collection efforts within DOD. As part of this effort, DOD should provide additional guidance containing clear criteria and definitions for reporting foreign subcontracts. We also recommend that the Under Secretary of Defense for Acquisition and Technology direct the Director of the Office of Defense Procurement to (1) develop and implement controls and procedures for periodically verifying compliance with the foreign subcontract reporting requirement and specify how to transmit the information to the Office of Foreign Contracting, as a means of improving the completeness and consistency of its data, and (2) develop and implement procedures for entering data, verifying critical fields, documenting database programs, and querying the database, to improve the Office’s database management practices. In commenting on a draft of this report, DOD did not agree with the need for our first recommendation to ensure that data being collected satisfy common user needs. DOD stated that existing regulations and procedures governing the generation of data needed to address defense trade and industrial base issues are sufficient because it provides the data it collects to other groups within DOD. Our review, however, demonstrated that similar data are being collected by other offices. Further, our recommendation is in accordance with DOD’s policy that states the Department will regularly review and evaluate opportunities for improvements to increase the usefulness of information and reduce the cost of information collection activities for both DOD and contractors. We have modified our recommendation to clarify that we are referring to the existing data collection requirement. DOD also stated that the reporting requirement is clear from the language in the relevant Defense Federal Acquisition Regulation Supplement clause.
However, the reporting requirement has been interpreted differently by contractor and government officials. The varying interpretations indicate a lack of understanding about what subcontracts should be reported, which detracts from the consistency of information actually contained in the database. DOD did not fully concur with our second recommendation to improve the collection and management of foreign subcontract data. Our findings relating to poor database management arose from our attempts to use the database to determine the total value of DOD’s foreign subcontract awards. We could not determine the total value from the database. First, some entries were coded so as not to be counted in the totals. Second, the database included subcontracts for National Aeronautics and Space Administration procurements and for direct commercial sales, which should not be in it. Third, it included subcontracts performed in the United States, which also should not be in it. Finally, in our attempt to match entries from the DD 2139 database with the subset of information on foreign subcontracts found in the Defense Contract Management District International database, we found subcontracts that should have been in the DD 2139 database but were not. Taken together, these findings represent a significant degradation of the value of the information. If DOD plans to use the data, and a recent directive by the Under Secretary suggests that the data will become more important, the integrity of the data needs to be enhanced. Our analysis showed that these problems are directly attributable to the lack of controls and procedures for periodically verifying compliance with the reporting requirement and the lack of procedures for managing and using the database. DOD stated that it already maintains the most complete database on foreign subcontracting and that periodically verifying compliance would be too costly. Having the most complete database does not address the value of the data contained in it. In addition, periodically verifying compliance with the reporting requirement could be accomplished as part of contracting officers’ routine oversight responsibilities. DOD agreed that there are no written procedures for managing and using the DD 2139 database, but stated that none are needed. DOD guidance, however, states that database managers must have written documentation to maintain their systems. Having written procedures for managing and using the database, such as controls for data entry and verification, is important to ensuring the reliability, accuracy, and usefulness of the information contained in the DD 2139 database. DOD’s written comments and our evaluation of them are discussed in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services and the House Committee on National Security; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. To determine trends in the Department of Defense’s (DOD) foreign sourcing, we analyzed DOD’s DD 350 data on prime contract awards, which were adjusted to reflect constant 1998 dollars, from fiscal year 1987 to 1997. We examined the amounts DOD purchased at the prime contractor level by country and by item.
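As a simple illustration of the constant-dollar adjustment just described, the sketch below rescales nominal award values with a price index so that amounts from different fiscal years can be compared. The deflator values and the award amount are hypothetical; they are not the figures underlying the report.

```python
# Illustrative constant-dollar adjustment of the kind described above,
# converting nominal contract award values to constant fiscal year 1998
# dollars with a price deflator. The deflator index and award amount are
# hypothetical, not the data GAO used.

deflators = {1987: 0.72, 1992: 0.85, 1997: 0.98, 1998: 1.00}  # hypothetical index, FY 1998 = 1.00

def to_constant_1998_dollars(nominal_value: float, fiscal_year: int) -> float:
    """Scale a nominal value by the ratio of the base-year deflator to the
    deflator for the fiscal year in which the award was made."""
    return nominal_value * deflators[1998] / deflators[fiscal_year]

# A hypothetical $10 billion awarded in FY 1987 expressed in FY 1998 dollars.
print(f"${to_constant_1998_dollars(10_000_000_000, 1987):,.0f}")
```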
We performed a similar assessment of DOD’s data on foreign subcontract awards. However, we did not include a trend analysis of DOD’s foreign subcontract procurements because of the data weaknesses described in this report. In addition, we reviewed DOD’s annual reports to Congress on purchases from foreign entities for fiscal year 1995 to 1997 and the laws and regulations requiring advance notification of contract performance outside the United States. We discussed the law and regulations with DOD and industry officials. We also examined DOD’s policy and the chronology of changes to regulations regarding this reporting requirement. To determine the completeness of DOD data collection efforts, we tried to compare the DD 2139 database to other government and commercial databases. We were unable to use many of the sources we identified because they did not contain fields that could be readily compared to the DD 2139 database. We obtained foreign subcontract data from the Defense Contract Management District International (DCMDI) and the Defense Contract Management Command’s (DCMC) customs team. Each of these data sources contained similar information to the DD 2139 database, including prime and subcontract numbers, transaction values, and subcontractor names. However, the DCMC import data were based on actual deliveries and not contract awards, unlike the DD 2139 and DCMDI data. Given the time difference between contract award and delivery data, we concentrated on matching the DD 2139 database with the DCMDI database. DCMDI’s database is used internally to track foreign subcontracts administered by the district’s field offices and is not representative of the universe of foreign subcontracts. We did not perform a reliability assessment of the DCMDI database because we only used it to identify possible unreported foreign subcontracts that we could trace back to original source documentation. To compare the two databases, we performed an automated and manual match of fiscal year 1997 DCMDI data with multiple years of the DD 2139 data to verify data entries. We sampled data records from the DCMDI database, which led us to examine 49 foreign subcontracts. We then eliminated all National Aeronautics and Space Administration and fuel and subsistence contracts because these types of subcontracts are excluded from the foreign subcontract reporting requirement. By comparing the two data sets, we found 7 subcontracts that matched and 42 subcontracts that did not appear in the DD 2139 database. For the 42 subcontracts, we obtained contractual documentation from the DCMC field offices and contractors to verify information about the prime contracts and subcontracts and to ensure that the contracts contained the foreign subcontract reporting requirement clause. We interviewed the contractors to determine whether they reported the foreign subcontracts to the Office of Foreign Contracting and discussed reasons for not reporting. We also interviewed officials from several defense companies, DCMC representative offices, and program offices. We discussed with company officials their processes for tracking foreign subcontracts and compliance with the DD 2139 reporting requirement. We obtained supplier lists for two defense programs and surveyed several subcontractors about the DD 2139 reporting requirements and corresponding regulations. 
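The automated portion of the match described above can be sketched as a comparison of normalized contract identifiers across the two data sets. The record layouts, field names, and values below are hypothetical and do not reproduce the DD 2139 or DCMDI databases.

```python
# Hypothetical sketch of an automated match between two subcontract data
# sets, keyed on prime contract and subcontract numbers. Records that
# appear in the administration data but not in the reported data are
# candidates for follow-up against source documents, as described above.

def normalize(number: str) -> str:
    """Strip punctuation and case differences that commonly cause
    matching records to be missed."""
    return "".join(ch for ch in number.upper() if ch.isalnum())

dd2139 = [
    {"prime": "N00019-97-C-0001", "sub": "SC-14", "value": 1_200_000},
]
dcmdi = [
    {"prime": "N0001997C0001", "sub": "sc-14", "value": 1_200_000},
    {"prime": "F33657-96-C-2000", "sub": "SC-02", "value": 850_000},
]

reported = {(normalize(r["prime"]), normalize(r["sub"])) for r in dd2139}
unmatched = [r for r in dcmdi
             if (normalize(r["prime"]), normalize(r["sub"])) not in reported]

for record in unmatched:
    print(record["prime"], record["sub"], record["value"])
```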
With DOD officials, we discussed their procedures for monitoring subcontracts, including subcontractor performance, and for reviewing and approving requests for duty-free entry of foreign imports. To assess DOD’s management of data on foreign subcontract procurements, we reviewed DOD’s DD 2139 database for fiscal years 1990 through 1997, which were the only automated data available during our review. We performed various programming queries to examine the database structure and critical fields. We discussed with Office of Foreign Contracting officials the process for ensuring proper data entry, including error detection and correction procedures, reconciliation of output reports with input entries, and verification of contractor compliance with the reporting requirement. We requested documentation describing or evaluating the data system, but none was available. We did not compare the DD 2139 data with original source documents because no criteria, such as written standards for data entry and management, exist. We performed our review between January and September 1998 in accordance with generally accepted government auditing standards. Limitations of the DD 2139 database have been identified and discussed in earlier sections of the report. Where possible, corroborating evidence was obtained from other databases and original source documentation. The DD 350 database provides the most commonly used information on DOD procurements. However, we did not assess the reliability of the DD 350 data. The following are GAO’s comments on DOD’s letter dated October 29, 1998. 1. We are not proposing the establishment of a new data collection requirement. Instead, we are recommending that the current data collection efforts be enhanced to satisfy the information needs of the offices working on defense trade and industrial base issues. Such action would be in compliance with DOD policy to regularly review and evaluate opportunities for improvements to increase the usefulness of information and reduce the cost of information collection activities for both DOD and contractors. If the collection of foreign subcontracting award data (DD 2139) were improved, the data could meet multiple information needs. To avoid further misunderstanding, we have clarified the wording of our recommendation. 2. Prior to our review, the Office of Industrial Capabilities and Assessments was unaware of the data collection efforts of the Office of Foreign Contracting. The Office of Industrial Capabilities and Assessments has an industrial base questionnaire that, in part, collects information on subcontractors similar to the information collected by the Office of Foreign Contracting. The two offices would benefit from coordinating with each other to avoid some duplication of effort and avoid burdening industry with requests for similar information. According to DOD policy, information should be collected in a nonduplicative and cost-effective manner. Moreover, the Under Secretary of Defense for Acquisition and Technology recently initiated reviews of the globalization of the defense industrial base and its effects on national security. Information on suppliers located outside the United States or owned by foreign entities, particularly those at lower tiers, such as the information collected by the Office of Foreign Contracting, will be instrumental in evaluating the extent and effects of this globalization. 3.
The Office of Foreign Contracting has no mechanism for systematically verifying contractor compliance with the foreign subcontract reporting requirement. Unless some verification is performed, DOD has no assurance of the accuracy of the total value of foreign subcontract awards. We recognize that the Office of Foreign Contracting has limited resources for performing an extensive verification of contractor compliance. To assist in the verification process, some follow-up could be performed by contracting officers because defense companies are required to submit certain DD 2139 information to them. However, the contracting officers that we spoke with were often unaware of this reporting requirement and, therefore, would need to be educated about the requirement so that they could periodically check contractor compliance when performing routine oversight functions such as certifying duty-free entry of imported items. In 1989 we reported that the Office of Foreign Contracting sent a letter to the top 100 prime contractors informing them of the foreign subcontract reporting requirement and found that about one-third had reported. The Office of Foreign Contracting has not performed another survey of defense companies since then. Furthermore, officials from defense companies told us that the Office of Foreign Contracting had not contacted them to verify the data they had submitted. Periodic follow-up with the defense companies would help ensure that erroneous information, such as subcontract awards for nondefense contracts, would not be submitted. 4. For awarded contracts, the reporting requirement provides instructions on when and how contractors are to report subcontract performance outside the United States to the Office of Foreign Contracting. However, for offers exceeding $10 million, if the company is aware at the time its offer is submitted that it or its first-tier subcontractor intends to perform any part of the contract that exceeds $500,000 outside the United States and Canada, and if that part could be performed inside the United States or Canada, DD 2139 information must be submitted with the offer to the contracting officer. The regulation (Defense Federal Acquisition Regulation Supplement 225.7202) then stipulates that contracting officers are to forward a copy of reports submitted by successful offerors to the Office of Foreign Contracting. However, the contracting officers we spoke with were not aware that the regulation instructed them to forward any information to the Office of Foreign Contracting and had never provided the Office with such information. Consequently, information provided in firms’ offers is not being fully captured by the Office’s database. 5. Poor database management practices undermine the reliability of DOD’s foreign subcontract data. The Office of Foreign Contracting lacks appropriate written standards for entering and verifying data. Such standards are necessary to ensure the reliability and integrity of the data. Our example of a programming error that resulted in 13 miscoded data entries merely illustrates the problems that can arise when no system controls are in place. DOD’s calculation of an error rate based on these 13 entries is erroneous and misleading. It is erroneous because statistical inferences such as error rates must be based on a random statistical sample assessed against defined parameters such as written procedures for data entry, verification, or database queries.
Without such documentation, we were unable to assess data reliability fully. It is also misleading because, as detailed in our report, we found numerous other examples of problems with the DD 2139 database that undermine its credibility. Besides the programming errors, we found data entry errors such as the inclusion of National Aeronautics and Space Administration subcontracts, direct commercial sales subcontracts, and U.S. subcontract awards. Other problems included evidence of noncompliance with the reporting requirement and inconsistent treatment of data. These problems support the need for written standards explaining the DD 2139 database’s structure, data fields, and procedures for data entry and verification.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) foreign procurement data, focusing on DOD's: (1) reported trends on contracts performed outside the United States; and (2) use of foreign subcontract information and the completeness and accuracy of how DOD collects and manages its data. GAO noted that: (1) for prime contracts, DOD purchases the majority of its defense equipment and services from contractors operating in the United States; (2) though subject to annual fluctuations, DOD's prime contract awards outside the United States remained about 5.5 percent of total DOD contract awards from fiscal year (FY) 1987 to 1997; (3) over this period, the value of DOD prime contracts performed both in and out of the United States declined; (4) prime contracts performed outside the United States tended to be concentrated in certain countries such as Germany, Italy, Japan, South Korea, and the United Kingdom and in certain sectors such as services, fuel, and construction; (5) at the subcontract level, the value of DOD's reported foreign subcontract awards ranged from almost $2 billion in FY 1990 to almost $1.1 billion in FY 1997, but these data have limitations; (6) the Office of Foreign Contracting does not consider the data needs of industrial base specialists in its efforts to collect foreign subcontract data; (7) industrial base specialists are often unaware that data of this nature are available; (8) furthermore, weaknesses in the Office of Foreign Contracting's data collection and management processes undermine DOD's ability to use the foreign subcontract data for defense trade and industrial base decision-making; (9) the Office has no mechanism for ensuring that contractors provide required foreign subcontract information, which contributes to the underrepresentation of foreign subcontract activity; (10) GAO's review of selected subcontracts disclosed instances in which foreign subcontracts were not reported to the Office because contractors were unaware of the reporting requirement or misunderstood the criteria for reporting a foreign subcontract; and (11) the Office's poor database management also compromises the credibility and usefulness of its foreign subcontract data.
Title XVII of EPAct 2005—Incentives for Innovative Technologies— authorized DOE to guarantee loans for projects that satisfy all three of the following criteria: (1) decrease air pollutants or man-made greenhouse gases by reducing their production or by sequestering them (storing them to prevent their release into the atmosphere); (2) employ new or significantly improved technologies compared with current commercial technologies; and (3) have a “reasonable prospect” of repayment. Title XVII identifies 10 categories of projects that are eligible for a loan guarantee, such as renewable energy systems, advanced fossil energy technologies, and efficient end-use energy technologies. Appendix II provides a list of these categories. The LGP office is under DOE’s Office of the Chief Financial Officer. LGP’s actions are subject to review and approval by a Credit Review Board. The Board met for the first time in April 2007; it approves major policy decisions of the LGP, reviews LGP’s recommendations to the Secretary of Energy regarding the issuance of loan guarantees for specific projects, and advises the Secretary on loan guarantee matters. DOE first received appropriated funds for the LGP’s administrative costs in early 2007 and began processing preapplications—in response to the August 2006 solicitation—and at the same time began to obtain staff and take other steps to initiate the program. During 2007, it reviewed preapplications for 143 projects and in October 2007 invited 16 of the preapplicants to submit full applications for loan guarantees. Appendix II includes information on the 16 projects invited to submit full applications. In general, according to DOE, the processing of full applications will require DOE to have numerous interactions with the applicants and private lenders. It will also require financial, technical, environmental, and legal advisors to assist with underwriting, approving, and issuing a loan guarantee. DOE estimated that the time between receiving an application and completing negotiations for a loan guarantee contract would range from 9 to 25 months, with additional time at the beginning to prepare and issue the solicitation and at the end to close the loan. On April 11, 2008, DOE issued a fiscal year 2008 implementation plan for $38.5 billion in solicitations, to respond to a requirement that DOE provide Congress information about future solicitations 45 days prior to issuing them. On June 30, 2008, DOE simultaneously issued three solicitations that total $30.5 billion—on (1) efficiency, renewable energy, and electric transmission ($10 billion), (2) nuclear power facilities ($18.5 billion), and (3) nuclear facilities for the “front end” of the nuclear fuel cycle ($2 billion). DOE plans to subsequently issue a fourth solicitation in late summer 2008 for advanced fossil energy projects ($8 billion). DOE is also required to annually provide Congress a report on all activities under Title XVII and issued the first report on June 15, 2007. Figure 1 shows a timeline of these and other key program events since 2005 that illustrate the status of the LGP through June 2008. On October 23, 2007, DOE’s final regulations for the LGP were published in the Federal Register. DOE had previously issued program guidelines in August 2006. The final regulations contain requirements for preapplication and application submissions; programmatic, technical and financial evaluation factors for applications; and lender eligibility and servicing requirements. 
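A schematic restatement of the three Title XVII criteria summarized at the start of this section is shown below. The data structure and boolean flags are illustrative only; DOE’s actual eligibility determinations rest on detailed technical, environmental, and financial review.

```python
# Schematic restatement of the three Title XVII criteria described above.
# The class and simple flags are illustrative, not DOE's review procedure.

from dataclasses import dataclass

@dataclass
class Project:
    reduces_or_sequesters_emissions: bool   # criterion 1
    new_or_significantly_improved: bool     # criterion 2
    reasonable_prospect_of_repayment: bool  # criterion 3

def title_xvii_eligible(p: Project) -> bool:
    """A project must satisfy all three statutory criteria to qualify."""
    return (p.reduces_or_sequesters_emissions
            and p.new_or_significantly_improved
            and p.reasonable_prospect_of_repayment)

print(title_xvii_eligible(Project(True, True, True)))    # True
print(title_xvii_eligible(Project(True, False, True)))   # False: technology not innovative
```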
The regulations incorporate and further clarify requirements of Title XVII related to eligibility, fees, default conditions, and audit documentation. The regulations also generally incorporate requirements set forth in OMB Circular A-129 Policies for Federal Credit Programs and Non-Tax Receivables, which prescribes policies and procedures for federal credit programs, such as applicant screening, lender eligibility, and corrective actions. Because loan guarantee programs pose significant financial risks, it is important to include appropriate mechanisms to help protect the federal government and American taxpayers from excessive or unnecessary losses. DOE changed some key aspects of the initial program guidelines in its final regulations to help make the program more attractive to lenders and potentially reduce financing costs for projects. These changes included increasing the maximum guarantee percentage, allowing the lender to separate or “strip” the nonguaranteed portion of the debt, and revising its interpretation of a Title XVII requirement that DOE have superior right to project assets pledged as collateral. Other important changes relate to increased specificity in key definitions and a requirement for independent engineering reports. Specifically, we found the following: Guarantee percentage. The final regulations allow for loan guarantees of up to 100 percent of the loan amount, which is limited to no more than 80 percent of the project costs, provided that, for a 100 percent guarantee, the loan must be disbursed by the Federal Financing Bank (FFB). The use of the FFB is required, in part, because a private lender may exercise less caution when underwriting and monitoring a loan with a 100 percent guarantee. The guidelines stated that DOE preferred not to guarantee more than 80 percent of the loan amount, which was limited to no more than 80 percent of the project costs. Because the regulations increased the maximum guarantee percentage, this change increases the risk that the government is willing to assume on a project by project basis. Stripping the nonguaranteed portion. When DOE guarantees 90 percent or less of a loan, the final regulations allow the nonguaranteed portion of a loan to be separated or “stripped” from the guaranteed portion. This change allows lenders greater flexibility in selling portions of a loan on the secondary market and could reduce overall funding costs for projects. In contrast, the guidelines and the proposed regulations did not allow stripping. Superiority of rights. Title XVII requires DOE to have superior rights to project assets pledged as collateral. In the proposed regulations, DOE interpreted this provision to require DOE to possess first lien priority to assets pledged as collateral. Therefore, holders of nonguaranteed portions of loans would be subordinate to DOE in the event of a default. In the final regulations, DOE changed its interpretation to allow proceeds received from the sale of project assets to be shared with the holders of nonguaranteed portions of loans in the event of a default. As noted in public comments on the proposed regulations, this practice is an established norm in project lending. DOE stated that it retains superiority of rights, as required by Title XVII, because DOE has sole authority to determine whether, and under what terms, the project assets will be sold at all. Key definitions. 
In the context of “innovative technologies,” the final regulations added a definition clarifying what constitutes a “new or significantly improved” technology, considerably expanded the definition of “commercial” technology already in use, and clearly linked the two definitions to each other. According to the regulations, a new or significantly improved technology is one that has only recently been developed or discovered and involves a meaningful and important improvement in productivity or value in comparison with the commercial technology in use. DOE’s regulations define a commercial technology as being in general use if it is employed by three or more commercial projects in the United States for at least 5 years. Independent engineering report. The final regulations require the applicant to provide an independent engineering report on the project, which was not required under the guidelines. According to the regulations, the engineering report should assess the project, including its site information, status of permits, engineering and design, contractual requirements, environmental compliance, testing and commissioning, and operations and maintenance. Although the final regulations generally address requirements from applicable guidance, we identified one key aspect related to equity requirements that is not clear. The final regulations state that DOE will evaluate whether an applicant is contributing significant equity to the project. The regulations define equity as “cash contributed by the borrowers and other principals.” Based on this definition, it appears that non-cash contributions, such as land, would not be considered equity. However, the LGP director told us that land and certain other non-cash contributions could be considered equity. As a result, the regulations do not fully reflect how DOE is interpreting equity, and potential applicants may not have a full understanding of the program’s equity requirements. DOE may not be well positioned to manage the LGP effectively and maintain accountability because it has not completed a number of management and internal control activities key to carrying out the program. As a result, DOE may not be able to process applications efficiently and effectively, even though DOE has begun to review its first application, and officials told us they will begin reviewing other applications as soon as they are submitted. The key activities that DOE has not sufficiently completed include (1) clearly defining its key milestones and its specific resource needs, (2) establishing policies and procedures for operating the program, and (3) agreeing upon key measures to evaluate program progress. The nature and characteristics of the LGP expose the government to substantial inherent risk; implementing these management and internal control tools is a means of mitigating some risks. According to our work on leading performance management practices, agencies should have plans for managing their programs that identify goals, strategies, time frames, resources, and stakeholder involvement in decision making. In January 2008 DOE completed a “concept of operations” document that contains, among other things: information on the LGP’s organizational structure; mission, goals, and objectives; and timelines, milestones, and major program activities that must be accomplished and their sequence. However, LGP officials told us they do not consider the concept of operations a strategic or performance planning document.
In addition, it is unclear whether LGP plans to set other timelines and milestones that would be available to stakeholders, such as applicants and Congress. Without associating key activities with the time frames it aims to meet, it is unclear how DOE can adequately gauge its progress or establish and maintain accountability to itself and stakeholders. As of March 2008, 14 of the 16 companies invited to submit full applications reported that they plan to submit their applications to DOE by the end of September 2008, and the other 2 plan to submit by the end of January 2009. DOE received one application in April 2008, which it has begun to review, and DOE officials told us they will begin reviewing other applications as soon as they are submitted. This influx of applications could cause a surge in workload, but it is not clear that DOE has obtained the resources it needs to carry out its application review activities. Although it is critical for agencies to determine the timing and type of resources needed, DOE has not determined the number and type of contractor resources it will need to review the applications, which could lead to delays. For example, DOE expects to need legal, engineering, environmental, and financial contracting expertise but has not completed plans describing the types of expertise needed, estimated when the expertise will be required, or determined to what extent each type of expertise will be needed. According to the LGP director, much of this expertise will have to be acquired through new contracts that DOE must negotiate and that generally take some months to put into place. To the extent that these resources are not available when needed, DOE could experience delays in reviewing the applications. In early April 2008, the LGP director said that his office is working with other DOE offices to develop these contracts and considers this activity high priority; while the completion date for an acquisition and contract vehicles strategy was initially set for the end of April, the timetable DOE includes in its agency comments letter indicates an August 2008 completion date. In addition, as of April the LGP office was 7 staff short of its authorized level of 16 for fiscal year 2008; the director told us it has faced delays in hiring permanent staff, although he indicated that the office has enough permanent staff to review the first 16 applications. He also said that the permanent and contractor staff LGP has hired have many years of project finance or loan guarantee experience at other institutions. Management has a fundamental responsibility to develop and maintain effective internal controls to help ensure that programs operate and resources are used efficiently and effectively to achieve desired objectives and safeguard the integrity of their programs. As of May 2008, DOE had not completed policies and procedures to select loans, identify eligible lenders and monitor loans and lenders, estimate the costs of the program, or account for the program, despite reporting to Congress in June 2007 that it would have completed most of these activities by the end of fiscal year 2007. OMB Circular A-129 calls for agencies to develop policies and procedures to select loans, including appropriate applicant screening standards to determine eligibility and creditworthiness. 
In this regard, from August 2006 through October 2007, DOE conducted a preapplication process to help it develop final regulations; develop and test policies, criteria, and procedures for reviewing preapplications; and determine which projects it would invite to apply for loan guarantees. Conducting the preapplication process also enabled DOE to respond to congressional interest in launching the program, according to DOE officials. We found that, during its preapplication review process, DOE did not always sufficiently document why it ultimately selected projects that reviewers did not score highly or recommend initially. DOE documented the results of the selection process, including its technical and financial reviews for individual projects, its joint technical-financial reviews for categories of projects, and its decisions made during its secondary review process. However, we found that DOE’s documentation for deciding which projects to recommend to the Credit Review Board did not always provide sufficient justification. While our discussions with DOE officials helped clarify the documentation for 6 of the 16 invited projects, they did not for 2 of those projects. According to DOE officials, they gave greater weight to the technical merit than the financial merit of the projects during the preapplication selection process. In addition, a consultant DOE hired to review the preapplication process found that although the files were in “good working order,” DOE did not consistently conduct and document its technical evaluations and did not document financial evaluations in depth. The consultant recommended that DOE take steps to establish standards for these evaluations and increase the level of transparency in the preapplication evaluation process. We also found that the financial and technical criteria DOE used to review the preapplications were not sufficiently defined in some cases. For example, a requirement that is central in considering projects’ overall eligibility—whether it is “innovative,” also known as “new and significantly improved”—was difficult to determine, according to several program managers and reviewers. After the initial review process was completed, DOE further defined what it considers “new and significantly improved” in its final regulations, but has not correspondingly updated the review criteria. In addition, when DOE conducted its financial reviews, it evaluated projects by assigning scores between zero and four—with zero being the weakest score and four being the strongest score. However, DOE did not define what the possible scores signified. Moreover, 60 percent of a preapplicant’s financial score was based on creditworthiness; yet, DOE did not require preapplicants to submit pertinent financial and credit information such as audited financial statements or credit histories. DOE has not fully developed detailed internal policies and procedures, including criteria, for selecting applications. To review the first 16 projects, DOE officials told us they will use criteria developed for the preapplication process. For projects that apply in response to future solicitations, DOE plans to amend current preapplication criteria and develop additional evaluation factors that will be specific to certain technology areas or sectors. According to DOE officials, as of May 2008, DOE has also hired one staff person to develop credit policies and procedures specific to LGP, and to fully establish its credit policy function. 
They also said that these credit policies and procedures would provide internal guidance related to some aspects of application review. DOE officials told us they expect the application process guidance they developed for companies to also serve as internal review policies and procedures. This guidance provides instructions on the content and format applicants should adhere to when applying for a guarantee, such as background information; a project description; and technical, business, and financing plans. The guidance generally aligns with information in the final regulations on the factors DOE plans to review and should make it easier for companies to develop applications. However, in some cases the guidance lacks specificity for applicants. In addition, when we considered the guidance for use as internal policies and procedures, as DOE has indicated it will be used, we determined that it does not contain criteria or direction that would be sufficient for DOE reviewers. Specifically, it lacks instruction and detail regarding how DOE will determine project eligibility and review applications, such as roles and responsibilities, criteria for conducting and documenting analyses, and decision making. For example, we found the following: Project eligibility. DOE does not delineate how it will evaluate project eligibility—that is, how each project achieves substantial environmental benefits and employs new or significantly improved technologies. The guidance requires applicants to submit background information on the technologies and their anticipated benefits but does not require enough detail for DOE to assess the information. Without such detail, it is unclear how DOE will measure each project’s contribution to the program. Independent engineer’s report. DOE’s guidance does not provide sufficient detail on the technical information applicants should submit in this report, even though the guidance requires that the report comprehensively evaluate five technical elements as well as contractual requirements and arrangements. DOE officials told us that applicants generally develop this report for investors and that the reports will likely be of varying quality and detail. DOE officials also expect that, in developing a separate report that assesses this information, they will likely need to fill considerable gaps and conduct additional analyses. While DOE recognizes these reports serve an important due diligence function, DOE has not provided applicants with specific instructions on what to include. As a result, DOE is likely to lose efficiency and effectiveness when it uses the reports to aid in evaluating loan guarantee applications. Creditworthiness. For a company to be eligible for a loan guarantee, a reasonable prospect of repayment must exist and the applicant cannot have delinquent federal debt; both conditions must be determined at the beginning of the review process to establish whether an applicant is even eligible. Therefore, a sound assessment of creditworthiness is essential. However, the criteria DOE has established to evaluate creditworthiness—which it used during the preapplication process and plans to use for future applications—did not take into account the more meaningful and thorough information required for the full application process. In addition, while DOE’s guidance requests that applicants submit more complete information, such as a credit assessment, it does not provide details regarding how DOE will evaluate the information to determine creditworthiness.
Project cost information. DOE’s guidance for the application process instructs applicants to indicate if their cost estimates are firm or subject to change, but it does not request applicants to report a level-of-confidence in their total project estimates. GAO has reported that for management to make good decisions and determine if a program is realistically budgeted, the estimate must quantify the uncertainty so that a level of confidence can be given about the estimate. For example, an uncertainty analysis could inform DOE management that there is a 60 percent chance that a project’s cost will be greater than estimated. Without requiring information on the uncertainty in project cost estimates and specifying how it will assess that information, DOE may not be able to appropriately determine a project’s feasibility and identify projects that could eventually require substantially more investment or loans for completion. Without sufficient internal policies and procedures that correspond to application components, DOE’s application review process will lack transparency and it will be difficult for DOE to consistently, thoroughly, and efficiently evaluate project applications. OMB Circular A-129 calls for agencies to establish policies and procedures to identify eligible lenders and to monitor loans and lenders. DOE has hired a director of monitoring and, according to DOE officials, is currently developing policies and procedures that will include (1) processes for identifying eligible lenders through a competitive process, as well as an associated checklist and guide for evaluating potential lenders, and (2) loan servicing and monitoring guidelines. These policies and procedures may build upon the monitoring policies of the Overseas Private Investment Corporation (OPIC). Implementing rigorous monitoring policies and procedures will help DOE ensure the success of the loan guarantee program. According to DOE officials, these policies and procedures will be completed before DOE issues the first loan guarantees. As required by the LGP’s fiscal years 2007 and 2008 appropriation, DOE plans to charge borrowers fees to cover subsidy costs, as permitted by Title XVII. However, estimating the subsidy cost for the LGP will be difficult because of inherent risks due to the nature and characteristics of the program. To the extent that DOE underestimates the costs and does not collect enough fees from borrowers, taxpayers will ultimately be responsible for any shortfall. Therefore, it is critical that DOE have a sound and comprehensive methodology to develop its cost estimates. Guidance on preparing subsidy cost estimates lists procedures necessary to estimate subsidy costs, such as the development of a cash flow model; the review and approval process; and documentation of the cash flow model and underlying assumptions. OMB Circular A-129 requires agencies to develop models to estimate subsidy costs before obligating direct loans and committing loan guarantees. According to LGP officials, DOE has submitted a draft subsidy cost model to OMB for approval and has drafted documentation for the subsidy calculation process. Title XVII requires DOE to collect fees from borrowers to cover applicable administrative costs. Such costs could include costs associated with evaluating applications; offering, negotiating, and closing guarantees; and servicing and monitoring the guarantees. 
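To make the role of a cash flow model concrete, the sketch below computes a loan guarantee subsidy cost as the net present value of expected default claims, net of recoveries, less fees collected. The discount rate, default and recovery assumptions, and dollar amounts are hypothetical; they are not drawn from DOE’s draft model or OMB’s credit subsidy calculator.

```python
# Simplified illustration of a loan guarantee subsidy cost calculation of
# the kind a cash flow model supports: the net present value of expected
# government outflows (default claims net of recoveries) less inflows
# (fees). All figures and assumptions below are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) cash flows to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

guaranteed_amount = 500_000_000      # hypothetical guarantee
discount_rate = 0.045                # stand-in for a Treasury rate

# Hypothetical outflows: a 10 percent chance of default in year 5, with
# 40 percent of the defaulted balance recovered from collateral.
expected_claims = [(5, 0.10 * guaranteed_amount * (1 - 0.40))]

# Hypothetical inflows: an up-front fee charged to the borrower.
fee_rate = 0.02
fees_collected = [(0, fee_rate * guaranteed_amount)]

subsidy_cost = (present_value(expected_claims, discount_rate)
                - present_value(fees_collected, discount_rate))

# A positive result means the fee, as set, would not cover the estimated
# cost of the guarantee; under a "borrower pays" design the fee would
# need to be raised or the estimate revisited.
print(f"Estimated subsidy cost: ${subsidy_cost:,.0f}")
```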
The federal accounting standard for cost accounting states that cost information is an important basis for setting fees and reimbursements and that entities should report the full cost of programs, including the costs of (1) resources the office uses that directly or indirectly contribute to the program, and (2) identifiable supporting services other offices provide within the reporting entity. While DOE has prepared a schedule of fees to be charged for the first solicitation, it could not provide support for how it calculated the fees. DOE officials stated that they used professional judgment as a basis for the fee structure. However, DOE has not developed policies and procedures to estimate administrative costs, including a determination of which costs need to be tracked. For example, DOE has not tracked administrative costs associated with the time general counsel staff have spent working on issues related to the LGP. Therefore, DOE lacks assurance that the fees it collects will fully cover applicable administrative costs, particularly support costs from offices outside of the LGP office, such as the general counsel. According to DOE officials, some element of judgment must be used at this time in the determination of fees, and as more experience is gained, they will be able to develop policies and procedures designed to ensure that adequate fees are collected to cover administrative costs. In April 2008, DOE officials told us that policies and procedures to account for the LGP are nearly complete. Under the LGP regulations, DOE may issue loan guarantees for up to 100 percent of the loan amount as long as FFB disburses the loan. OMB Circular A-11, Preparation, Submission and Execution of the Budget, calls for credit issued by FFB to be budgeted for as a direct loan. Because the accounting treatment mirrors the budgeting, DOE would also account for such loans as direct loans. Accordingly, DOE has indicated that the policies and procedures will cover accounting for both direct loans and loan guarantees. DOE has also not completed the measures and metrics it will use to evaluate program progress. DOE included some of these in its fiscal year 2009 budget request and its concept of operations document, but LGP’s director told us the measures and metrics have not been made final because DOE and OMB have not yet agreed on them. In assessing the draft measures and metrics, we observed the following shortcomings: DOE intends to measure outcomes directly tied to overall program goals—installing new capacity, reducing greenhouse gas emissions, and reducing air pollution—and has said it will develop baselines or benchmarks for these outcomes. However, it has not yet gathered and analyzed the necessary data on, for example, existing capacity or current emission levels for categories of LGP project technologies. DOE included a measure for the recovery of administrative costs but not one for the recovery of subsidy costs, which will most likely be the more significant program cost. DOE’s metric to assess the effectiveness of financing decisions—containing the loss rate to 5 percent—may not be realistic; it is far lower than the estimated loss rate of over 25 percent that we calculated using the assumptions included in the fiscal year 2009 president’s budget. The nature and characteristics of the LGP will make estimating the program’s subsidy costs difficult even if DOE develops a sound and comprehensive methodology.
Evaluating the risks of individual projects applying for loan guarantees will be difficult because the LGP targets innovative energy technologies and because projects will likely have unique characteristics—varying in size, technology, and experience of the project sponsor. For the first solicitation alone, the technologies range from a modest energy efficiency project to multiyear advanced coal projects, and estimated project costs range from around $25 million to more than $2 billion. In fiscal year 2008, DOE plans to further diversify the types of technology projects that it will consider for its loan portfolio, including nuclear power facilities, whose project costs may be more than $5 billion for each facility. Further, DOE will not gain significant experience in each technology because the program’s objective is to commercialize a limited number of projects employing each type of innovative technology. Therefore, the types of projects will, by design, evolve over time, and the experience and data that DOE gains may not be applicable to evaluating the risks of projects applying in the future. The composition of DOE’s eventual portfolio will even further limit the data available to help DOE evaluate project risks. Unlike an agency that provides a high volume of loan guarantees for relatively similar purposes, such as student loans or home loans, DOE will likely approve a small number of guarantees each year, leaving it with relatively little experience to help inform estimates for the future. In addition, DOE’s loan guarantees will probably be for large dollar amounts, several of which could range from $500 million to more than $1 billion each. As a result, if defaults occur, they will be for large dollar amounts and will likely not take place during easily predicted time frames. Recoveries may be equally difficult to predict and may be affected by the condition of the underlying collateral. In addition, project risks and loan performance could depend heavily on regulatory and legislative actions, as well as future economic conditions, including energy prices and economic growth, which generally cannot be predicted accurately. These factors combine to make it difficult for DOE to prepare reliable estimates of subsidy costs. To the extent that DOE underestimates the costs of the LGP and does not collect enough fees from borrowers, taxpayers will ultimately have to pay for any shortfalls. Under FCRA, DOE is required to update, or reestimate, the subsidy costs of LGP to reflect actual loan performance and changes in expected future loan performance. Shortfalls identified in annual reestimates are automatically funded by the federal government under the terms of the FCRA and are not subject to congressional scrutiny during the annual appropriation process. The likelihood of misestimates and the practice of charging fees to cover all the estimated costs may lead to biases in the projects that ultimately receive loan guarantees and tilt the portfolio of loan guarantees toward those that will not pay for themselves. In general, potential borrowers will know more about their projects and creditworthiness than DOE. As a result, borrowers will be more likely to accept loan guarantee offers if they believe DOE has underestimated the projects’ risks and therefore set the fee too low, than if they believe DOE has overestimated risks. Underestimated fees amount to an implicit subsidy.
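The self-selection dynamic described above can be illustrated with a stylized simulation: even when the fee is an unbiased but noisy estimate of each project’s true cost, borrowers who know their own risk accept underpriced offers more often, so the accepted portfolio carries an implicit subsidy on average. All parameters below are hypothetical.

```python
# Stylized simulation of the self-selection effect described above. The
# fee is an unbiased but noisy estimate of each project's true cost, and
# borrowers -- who know more about their own risk -- are more likely to
# accept when the fee understates that cost. The accepted portfolio then
# ends up underpriced on average. All parameters are hypothetical.

import random

random.seed(1)
accepted_gap = []   # fee minus true cost, for accepted guarantees only

for _ in range(100_000):
    true_cost = random.uniform(0.02, 0.10)        # true cost per dollar guaranteed
    fee = true_cost + random.gauss(0.0, 0.02)     # noisy, unbiased fee estimate
    accept_prob = 0.9 if fee < true_cost else 0.3 # bargains are accepted more often
    if random.random() < accept_prob:
        accepted_gap.append(fee - true_cost)

avg_gap = sum(accepted_gap) / len(accepted_gap)
print(f"Guarantees accepted: {len(accepted_gap)}")
# A negative average means accepted guarantees were underpriced on average,
# i.e., an implicit subsidy borne by the government.
print(f"Average fee shortfall per dollar guaranteed: {avg_gap:.4f}")
```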
CBO reported that such a bias in applicants’ acceptance of loan guarantees increases the likelihood that DOE’s loan guarantee portfolio will have more projects for which DOE underestimated the fee. CBO evaluated the cost of the LGP and estimated that DOE would charge companies fees that are, on average, at least 1 percent lower than the likely costs of the guarantees. To the extent that DOE underestimates the fee and does not collect enough fees from borrowers to cover the actual subsidy costs, taxpayers will bear the cost of any shortfall. Even if DOE estimates the subsidy cost with a reasonable degree of accuracy and charges the applicants fees to cover the true costs, there is a potential for a self-selection bias in the companies participating in the program toward those for which the fee is small relative to the expected benefits of the loan guarantee (such as more favorable loan terms or a lower interest rate). As CBO recently reported about the LGP, a loan guarantee would improve a project’s financial viability if the cost of the guarantee is shifted to the federal government. However, when the borrower pays a fee to cover the subsidy cost, as is the case with the LGP, the cost and most of the risk stay with the project, and the viability of the project may not be substantially improved. Therefore, for such projects, there is a practical limit to how large the fee can be without jeopardizing the project’s financial prospects; these constraints add to the challenge of setting fees high enough to compensate for uncertainties. To the extent that some projects targeted by Title XVII are not financially viable without some form of federal assistance or favorable treatment by regulators, these projects will not pursue loan guarantees even though they are otherwise eligible. As a result, if this financial viability is not distributed evenly across technologies targeted by Title XVII, the projects that ultimately receive loan guarantees may not represent the full range of technologies targeted by Title XVII. DOE officials noted that the “borrower pays” option may cause riskier potential borrowers that would be required to pay a higher fee to either (1) contribute more equity to their projects to lower the fee or (2) abandon their projects and not enter the program. If potential borrowers contribute more equity, this could decrease default risk or improve potential recoveries in the event of a default. More than a year has passed since DOE received funding to administer the LGP, and we recommended steps it should take to help manage the program effectively and maintain accountability. We recognize that it takes some time to create a new office and hire staff to implement such a program. However, instead of working to put controls in place to help ensure the program’s effectiveness and to mitigate risks, DOE has focused its efforts on accelerating program operations. Moreover, because loan guarantee programs generally pose financial risk to the federal government, and this program has additional inherent risks, it is critical that DOE complete basic management and accountability activities to help ensure that it will use taxpayer resources prudently. These include establishing sufficient evaluation criteria and guidance for the selection process, resource estimates, and methods to track costs and measure program progress.
Without completing these activities, DOE is hampering its ability to mitigate risks of excessive or unnecessary losses to the federal government and American taxpayers. The difficulties DOE will face in estimating subsidy costs could increase LGP’s financial risk to the taxpayer. If DOE underestimates costs, the likely end result will be projects that do not fully pay for themselves and an obligation to taxpayers to make up the difference. Furthermore, the inherent risks of the program, along with the expectation that borrowers will cover the costs of their loan guarantees, may lead to self-selection bias that tilts the portfolio of projects toward those for which costs have been underestimated. Neither we nor DOE will be able to fully evaluate the extent or magnitude of the potential financial costs to the taxpayer until DOE has developed some experience and expertise in administering the program. Expanding the LGP at this juncture, when the program’s risks and costs are not well understood, could unnecessarily result in significant financial losses to the government. Self-selection bias may also—under certain conditions—result in less than the full range of technologies targeted by Title XVII being represented in the LGP. The likely costs to be borne by taxpayers and the potential for self-selection biases call into question whether the program can fully pay for itself and whether it will be fully effective in promoting the commercialization of a broad range of innovative energy technologies. It is important to note that, while we found that inherent risks and certain features of the program may lead to unintended taxpayer costs and that self-selection biases may reduce the scope of participation in the program, this is not an indication that the overall costs of the program outweigh the benefits. Rather, it simply means that the costs may be higher and the benefits lower than expected. Finally, the extent to which these costs and benefits will differ from expectations over the life of the program is something that cannot be reasonably estimated until DOE gains some experience in administering the LGP. Even at the current planned pace of the program, it will take a number of years before we can observe the extent to which unintended taxpayer costs are incurred or the benefits of innovative energy technologies emerge. To the extent that Congress intends for the program to fully pay for itself, and to help minimize the government’s exposure to financial losses, we are suggesting that Congress may wish to consider limiting the amount of loan guarantee commitments that DOE can make under Title XVII until DOE has put into place adequate management and internal controls. We are also making recommendations to assist DOE in this regard.
To improve the implementation of the LGP and to help mitigate risk to the federal government and American taxpayers, we recommend that the Secretary of Energy direct the Chief Financial Officer to take the following steps before substantially reviewing LGP applications: complete detailed internal loan selection policies and procedures that lay out roles and responsibilities and criteria and requirements for conducting and documenting analyses and decision making; clearly define needs for contractor expertise to facilitate timely acquisition of that expertise; amend application guidance to include more specificity on the content of independent engineering reports and on the development of project cost estimates to provide the level of detail needed to better assess overall project feasibility; improve the LGP’s tracking of the program’s full administrative costs by developing an approach to track and estimate costs associated with offices that directly and indirectly support the program and including those costs, as appropriate, in the fees charged to applicants; further develop and define performance measures and metrics to monitor and evaluate program efficiency, effectiveness, and outcomes; and clarify the program’s equity requirements to the 16 companies invited to apply for loan guarantees and in future solicitations. We provided a draft of this report to the Secretary of Energy for review and comment. DOE generally disagreed with our characterization of its progress to date in implementing the LGP. DOE stated that two of our six recommendations were inapplicable to the LGP, indicated that it has largely accomplished the remaining four recommendations, and disagreed with our matter for congressional consideration. DOE further stated that our report contains flawed logic, significant inaccuracies, and omissions; however, DOE did not provide evidence to support these assertions. Our evaluation of DOE’s comments follows. A more detailed analysis is presented in appendix III. In particular, DOE stated that we placed disproportionate emphasis on activities that should be completed for a fully implemented loan guarantee program rather than one that is currently being implemented, and that we overlooked DOE’s accomplishments to date. We disagree. We believe that our report accurately assesses the LGP in its early development stage, and we focused our report’s analysis and recommendations on activities that should be completed before DOE begins to substantively review any applications. DOE states that it will have completed many of these activities before it issues loan guarantees, but we continue to believe these activities should be completed before DOE reviews applications and negotiates with applicants so that it can operate the program prudently. In several cases, DOE cites as complete documents and activities that were, and at the time of this report still are, in draft form. For example, in several instances DOE states that it has “implemented” its credit subsidy model. However, as of June 24, 2008, DOE indicated that OMB has not approved its model. Further, in an updated timetable provided in appendix B of its comment letter, DOE illustrates that a majority of these activities are not yet complete and that several will not be complete until the end of calendar year 2008. DOE’s entire letter, including its appendixes, is reproduced as appendix III of this report.
Regarding our recommendation on policies and procedures for conducting reviews, DOE cites policies and procedures that it believes are adequate for continuing program implementation. We disagree. DOE is developing credit policies and procedures, but it does not have complete internal application policies and procedures, which it should have as it begins to review and negotiate its first loan guarantee applications. DOE also lacks any substantive information in its external application guidance on how it will select technologies. DOE has indicated that some of this information will be included in future solicitations. DOE partially agreed with our recommendation to define the expertise it will need to contract for and stated that it is developing descriptions of necessary contractor expertise on a solicitation-specific basis. Although DOE may plan to complete such descriptions and other preparatory work for future solicitations, DOE did not provide us with any information for contractor expertise for the 2006 solicitation. DOE’s timetable provided in appendix B indicates an August 2008 completion date for its acquisition strategy and contract vehicles; this target may be in time for future solicitations, but it is not in time for the applications that companies are now submitting and DOE is reviewing. DOE also states that it is not possible to develop generic definitions of needed contractor expertise because the department’s needs will vary from solicitation to solicitation. We continue to believe it is both reasonable and feasible for DOE to develop estimates for the timing and type of resources the department will require. To ensure that its review and negotiation processes are transparent and consistent, DOE should use similar frameworks and rationales for statements of work within and across sectors. Specifically, DOE may need assistance in areas common to all technologies, such as cost and risk analysis, project management, and engineering and design reviews. DOE should be able to start defining these and other areas on the basis of past experience. DOE disagreed with our recommendation to provide more specific application guidance on the content of independent engineering reports. DOE stated that this specificity is not required, necessary, or appropriate for LGP implementation. We disagree. Providing more specificity to companies on DOE’s expectations for an application’s content—and basic information about how it will review the projects—will help companies develop higher quality application materials and help ensure thorough, consistent, and efficient evaluations. Taking this step is also likely to decrease the number of requests for more analyses or information from the applicant. We also continue to believe it is reasonable for DOE to provide more specificity on how to develop project cost estimates, including a level-of-confidence estimate, so that it can better evaluate project cost estimates. DOE disagreed with our recommendation that it track the administrative costs associated with the LGP. DOE stated that it is appropriate to track the costs of the LGP office and that it plans to develop a methodology for doing so, but that there is no reason to track the costs of certain support activities. We disagree. Title XVII requires DOE to charge and collect fees that the Secretary determines are sufficient to cover applicable administrative expenses.
The federal accounting standard for managerial cost accounting requires agencies to determine and report the full costs of government goods and services, including both direct and indirect costs associated with support activities. Therefore, we believe it is appropriate for DOE to consider costs associated with support activities, such as costs associated with the time general counsel staff spend working on issues related to the LGP, to be “applicable administrative costs.” If DOE does not consider support costs when setting fees, it cannot be assured that the fees it collects will fully cover all administrative costs incurred to operate the LGP. Regarding our recommendation to further develop and define performance measures and metrics before substantially reviewing LGP applications, DOE stated that it has developed initial draft performance measures and metrics with the aim of completing them by the end of calendar year 2008. We continue to believe such measures and metrics should be developed as soon as possible for the 16 projects DOE invited to apply for guarantees. In addition, DOE has emphasized its focus on selecting technologies and projects that will produce significant environmental benefits, in particular the avoidance of air pollutants and greenhouse gases. However, it is unclear how DOE will do so without gathering data to establish baseline measures and metrics associated with these benefits. DOE stated that it did not need to take additional action to implement our recommendation that it clarify the LGP’s equity requirements with the 16 companies invited to apply and in future solicitations because it had informed the 16 invited companies of DOE’s equity position. However, DOE officials told us that they communicated this information orally and did not provide specific documentation to the 16 companies. We believe it is reasonable to provide potential applicants with key information, such as the LGP’s equity requirement, in writing to help ensure that all potential applicants receive the same information. Furthermore, we continue to believe that this is appropriate information to include in future solicitations. In commenting on our matter for congressional consideration, DOE stated that the LGP has adequate management and internal controls in place to proceed and that it is well on the way to implementing the accepted recommendations contained in our report. We disagree. DOE has been slow to recognize the inefficiencies and inconsistencies it may face in not having key activities, policies, and procedures completed or in place before proceeding with its operations. While it is important that DOE make meaningful progress in accomplishing its mission under Title XVII, it is also important to operate the program prudently, given that billions of taxpayer dollars are at risk. DOE also made minor technical suggestions, which we incorporated as appropriate. DOE’s written comments and our more detailed responses are provided in appendix III. We are sending copies of this report to congressional committees with responsibilities for energy and federal credit issues; the Secretary of Energy; and the Director, Office of Management and Budget. We are also making copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Frank Rusco at 202-512-3841 or ruscof@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To assess the Department of Energy’s (DOE) progress in issuing final regulations that govern the loan guarantee program (LGP), we reviewed and analyzed relevant provisions of Title XVII of the Energy Policy Act of 2005; the LGP’s August 2006 guidelines and solicitation; its 2007 notice of proposed rulemaking; public comments on the proposed rulemaking; and final regulations published in the Federal Register. We compared the final regulations to applicable requirements contained in Title XVII and OMB Circular A-129, Policies for Federal Credit Programs and Non-Tax Receivables, which prescribes policies and procedures for federal credit programs. We also discussed the final regulations with DOE officials. To assess DOE’s progress in taking actions to help ensure that the program is managed effectively and to maintain accountability, we reviewed documentation related to DOE’s implementation of the LGP. Specifically, we reviewed and analyzed the LGP’s “concept of operations,” technical and financial review criteria for the preapplication process, DOE’s Application Process Overview Guidance, Preapplication Evaluation Procedural Guidance, minutes of Credit Review Board meetings held between April 2007 and February 2008, and other relevant documents. As criteria, we used our Standards for Internal Control in the Federal Government and budget and accounting guidance. Further, to assess DOE’s progress in developing measures and metrics, we applied GAO’s Government Performance and Results Act guidance and analyzed information in Title XVII, DOE’s budget request documents, and other relevant documents. When DOE had completed its preapplication review process, we obtained documentation from DOE’s decision files related to the 140 preapplications for 143 projects. We reviewed all decision files DOE provided to us and analyzed the documentation for the preapplications that DOE considered responsive to the August 2006 solicitation to determine if DOE conducted its review process consistently and documented its decisions sufficiently. Responsive decision files generally contained a summary of the technology; separate technical and financial review scoring sheets; minutes documenting results of joint technical-financial meetings; and a DOE summary of its secondary review process. We also reviewed other preapplication materials that DOE provided to us. We did not evaluate the financial or technical soundness of the projects that DOE invited to submit full applications. Further, we interviewed cognizant DOE officials from the LGP office, detailees from the Department of the Treasury, and contractor personnel assisting DOE with the preapplication process, the development of policies and procedures, and the implementation of the program. In addition, we interviewed officials from DOE’s Office of General Counsel; Office of the Chief Financial Officer; and program offices that participated in the technical reviews of the preapplications, including the Office of Energy Efficiency and Renewable Energy, the Office of Fossil Energy, the Office of Nuclear Energy, and the Office of Electricity Delivery and Energy Reliability. We also spoke with officials from the Departments of Agriculture and Transportation to discuss policies and procedures for managing their loan guarantee programs.
To examine the inherent risks associated with the LGP, including the “borrower pays” option of Title XVII, we reviewed our prior work on federal loan guarantee programs, including programs under the Maritime Administration, the Federal Housing Administration, and the Small Business Administration. We interviewed officials at and reviewed reports by the Congressional Budget Office. We also discussed risks with DOE officials. We conducted this performance audit from August 2007 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Energy Policy Act of 2005 (EPAct 2005) listed 10 categories of projects that would be eligible to apply for loan guarantees under Title XVII. In August 2006, DOE issued a solicitation inviting companies to submit preapplications for projects eligible to receive loan guarantees under Title XVII. The solicitation listed categories falling within 8 of the 10 Title XVII categories. The solicitation did not invite projects for two Title XVII categories: advanced nuclear energy facilities, and refineries, meaning facilities at which crude oil is refined into gasoline. Table 1 shows the 10 categories. On October 4, 2007, DOE announced that it had invited 16 projects to submit full applications for loan guarantees. Table 2 includes the projects’ sponsors, types, descriptions, and their current proposed locations. The following are GAO’s comments on the Department of Energy’s letter dated June 13, 2008. 1. See “Agency Comments and Our Evaluation,” pages 27-30 of this report. 2. DOE’s comments incorrectly cite GAO’s finding. We specifically refer to DOE’s determination of the type or timing of contractor resources. As we stated in the draft report, LGP’s director told us he has enough resources for reviewing and negotiating the loan guarantee applications related to the 2006 solicitation that companies are submitting. 3. We recognize DOE is in the process of hiring experienced staff. Nevertheless, the nature of the program may not allow DOE to develop significant expertise for any particular technology. 4. DOE has not yet developed final metrics and measures or gathered the data necessary to establish meaningful sector-specific baselines for its 2006 solicitation, from which it formally invited 16 solar, biomass, advanced fossil energy coal, and other projects to apply for loan guarantees. 5. We do not imply that DOE may be biased toward underestimating the subsidy costs of the program. Rather, we point out that the LGP’s inherent risks due to its nature and characteristics could cause DOE to underestimate its subsidy costs and therefore not collect sufficient fees from borrowers. 6. We do not believe that our report creates the impression that DOE could choose not to develop a methodology to calculate the credit subsidy cost. On the contrary, we state that it is critical that DOE develop a sound and comprehensive methodology to estimate subsidy costs because inherent risks due to the nature and characteristics of the program will make estimating subsidy costs difficult. 7. DOE did not provide us with a detailed presentation of the LGP’s credit subsidy model. 
On several occasions, the LGP director told us that we would be given a detailed presentation once the Office of Management and Budget (OMB) approved the credit subsidy model. As of June 24, 2008, DOE stated that OMB had not approved the model. 8. We believe that our report and the Congressional Budget Office (CBO) report DOE cites adequately explain the rationale for potential biases in applicants’ acceptance of loan guarantees that may increase the likelihood that DOE’s loan portfolio will have more projects for which DOE underestimated the fee. 9. The fiscal year 2009 President’s budget states that the assumptions related to the LGP reflect an illustrative portfolio; that is, the assumptions do not apply to a specific loan. Nevertheless, the 25-percent loss rate assumption from the budget does call into question whether the 5-percent loss rate draft metric DOE established to assess the effectiveness of financing decisions is realistic. 10. We have not inaccurately characterized the operation of the Federal Credit Reform Act of 1990 (FCRA). Instead, we specifically discuss reestimates to explain that even though DOE is proceeding with LGP under the provision that borrowers pay for the subsidy cost of the program, taxpayers will bear the cost of any shortfall, depending on the extent to which DOE underestimates the risks (subsidy cost) and therefore does not collect sufficient fees from borrowers. DOE correctly states that reestimates that increase the subsidy costs are funded by permanent indefinite budget authority, but DOE does not explain that these funds come from taxpayers. Furthermore, because of the nature and characteristics of the program, we believe it is unlikely that the program as a whole will result in savings associated with the subsidy cost because, to the extent that any loans default, the cost of the default will likely be much larger than the fee collected. Lastly, we did not discuss modifications under FCRA because DOE has not completed its policies and procedures on estimating subsidy costs. We would expect one component of these policies and procedures to explain how DOE will identify, estimate the cost of, and fund modifications. 11. If a project defaults, the cost of the default will likely be greater than the fee collected, thus creating a shortfall. Under FCRA, this shortfall would be identified during the reestimate process and would ultimately be subsidized by taxpayers. 12. OMB Circular A-11, Preparation, Submission and Execution of the Budget, describes the budgetary treatment for credit programs under FCRA requirements. While DOE explains that the financing account is nonbudgetary (its transactions are excluded from the budget totals), DOE fails to explain the sources of the financing account funds. According to OMB Circular A-11, “an upward reestimate indicates that insufficient funds had been paid to the financing account, so the increase is paid from the program account to the financing account to make it whole.” The program account is a budgetary account, and its transactions do affect the deficit and may require Treasury to borrow from the public. 13. We recognize that DOE plans to take steps to assess risk and develop mitigation strategies; however, we continue to believe that the nature and characteristics of the LGP result in certain inherent risks that, by definition, DOE is unlikely to be able to mitigate or accurately quantify.
As a result, there are likely to be many cases in which the risks will not be covered by the borrower fee or a risk reserve. In addition, even in instances where DOE’s estimates of subsidy costs are reasonably accurate, the “borrower pays” option may cause some potential borrowers to not pursue loan guarantees because the fee is too high relative to the benefits to the borrower of the loan guarantee. 14. As stated in the report, the inherent risks of the program, along with the expectation that borrowers will cover the costs of their loan guarantees, may lead to self-selection bias that tilts the portfolio of projects toward those for which costs have been underestimated. To the extent that some projects targeted by Title XVII are not financially viable without some form of federal assistance or favorable treatment by regulators, these projects will not pursue loan guarantees even though they are otherwise eligible. As a result, if this financial viability is not distributed evenly across technologies targeted by Title XVII, the projects that ultimately receive loan guarantees may not represent the full range of technologies targeted by Title XVII. 15. We changed “clearly” to “sufficiently.” We distinguish between the technical and financial reviews that staff conducted, and the rationale and clarity of documentation that management provided for its decision-making processes. We observed from our file review that, when preapplications contained sufficient information, reviewers applied the criteria LGP provided, and in some cases applied additional criteria in their assessments. These assessments were specific to the preapplication process, not the application process. At times the preapplications lacked meaningful information for reviewers to assess. The cases we highlight in our report are those in which the LGP office did not provide sufficient justification for inviting projects. GAO welcomes the LGP office’s efforts to establish formal standards and procedures. In recommending that LGP complete its measures and metrics associated with achieving benefits and employing new and significantly improved technologies, we believe this effort will also help inform future selection processes. 16. DOE did not require preapplications to include pro forma “financial statements.” Rather, preapplicants were required to submit financing plans, estimated project costs, and a financial model detailing the projected cash flows over the life cycle of the project. We believe that audited financial statements and credit ratings would be more useful in assessing creditworthiness. In addition, when evaluating preapplications, DOE did not combine technical and financial scores. Therefore, it is accurate to state that creditworthiness comprised 60 percent of the preapplicant’s financial score. 17. DOE erroneously refers to the preapplication process here. This analysis on project evaluation is specific to our discussion of project eligibility, and DOE’s use of external guidance as a proxy for internal policies and procedures for applications. 18. The statement DOE cites should be read in the context of the prior sentence, “While DOE recognizes these reports serve an important due diligence function, DOE has not provided applicants with specific instructions on what to include.” This sentence is also prefaced with “as a result” in the draft report. We changed the word “underwriting” to “evaluating” and added “applications” after “loan guarantees” to clarify our statement. 19. We generally agreed with the consultant’s finding.
Specifically, we found that DOE program offices used Credit Review Board-approved criteria as well as other criteria. In one case, these criteria were appropriate to differentiate projects in accordance with Title XVII. We could not fully determine whether the use of these additional criteria had any impact on the selection process. 20. See also comment 17. DOE’s response does not address our report’s analysis; specifically, we are referring to DOE’s application guidance. In addition, while DOE’s final rule states what applicants should submit, it and the application guidance do not indicate how DOE will evaluate these submissions. 21. Federal loan guarantees do help borrowers obtain more favorable terms than they may otherwise obtain. For example, a borrower may be able to get a lower interest rate, an extended grace period, or a longer repayment period when the loan is guaranteed by the federal government. 22. For clarification, we revised the report to indicate that DOE needs to “identify eligible lenders.” 23. For clarification, we incorporated DOE’s suggested revision. 24. We revised the report to reflect this update of information. 25. We revised the report to state “According to DOE, as of May 2008, DOE has hired one staff person to develop credit policies and procedures specific to LGP, and to fully establish its credit policy function.” In addition to the individuals named above, Marcia Carlsen and Karla Springer, Assistant Directors; Abe Dymond; Richard Eiserman; Jeanette M. Franzel; Carol Henn; Jason Kirwan; Kristen Kociolek; Steve Koons; Sarah J. Lynch; Tom McCool; Madhav Panwar; Mehrunisa Qayyum; Carol Herrnstadt Shulman; Emily C. Wold; and Barbara Timmerman made key contributions to this report.
Title XVII of the Energy Policy Act of 2005 established DOE's loan guarantee program (LGP) for innovative energy projects that should decrease air pollutants or greenhouse gases and that have a reasonable prospect of repayment. For fiscal years 2008 and 2009, Congress authorized the use of borrower fees to pay the costs of loan guarantees through Title XVII's "borrower pays" option, under which DOE will limit loan guarantees to $38.5 billion. Congress mandated that GAO review DOE's progress in implementing the LGP. GAO assessed DOE's progress in (1) issuing final regulations and (2) taking actions to help ensure that the program is managed effectively and to maintain accountability. GAO also assessed how inherent risks due to the nature of the LGP may affect DOE's ability to achieve intended program outcomes. GAO analyzed DOE's regulations, guidance, and program documents and files; reviewed Title XVII; and interviewed DOE officials. In October 2007, DOE issued regulations that govern the LGP and include requirements for application submissions, project evaluation factors, and lender eligibility and servicing requirements. The regulations also generally address requirements set forth in applicable guidance. Some key aspects of the initial LGP guidelines were revised in the regulations to help make the program more attractive to lenders and potentially reduce financing costs for projects. For example, the maximum loan guarantee percentage increased from 80 to 100 percent of the loan. In addition, the regulations define equity as "cash contributed by the borrowers," but DOE officials told us they also plan to consider certain non-cash contributions, such as land, as equity. As a result, applicants may not fully understand the program's equity requirements. DOE is not well positioned to manage the LGP effectively and maintain accountability because it has not completed a number of key management and internal control activities. As a result, DOE may not be able to process applications efficiently and effectively, although it has begun to do so. DOE has not sufficiently determined the resources it will need or completed detailed policies, criteria, and procedures for evaluating applications, identifying eligible lenders, monitoring loans and lenders, estimating program costs, or accounting for the program--key steps that GAO recommended DOE take over a year ago. DOE also has not established key measures to use in evaluating program progress. Risks inherent to the LGP will make it difficult for DOE to estimate subsidy costs, which could lead to financial losses and may introduce biases in the projects that receive guarantees. The nature and characteristics of the LGP and uncertain future economic conditions increase the difficulty in estimating the LGP's subsidy costs. Because the LGP targets innovative technologies and the projects will have unique characteristics--varying in size, technology, and experience of the project sponsor--evaluating the risks of individual projects will be complicated and could result in misestimates. The likelihood that DOE will misestimate costs, along with the practice of charging fees to cover the estimated costs, may lead to biases in the projects that receive guarantees. Borrowers who believe DOE has underestimated costs and has consequently set fees that are less than the risks of the projects are the most likely to accept guarantees. 
To the extent that DOE underestimates the costs and does not collect sufficient fees from borrowers to cover the full costs, taxpayers will ultimately bear the costs of shortfalls. Even if DOE's estimates of subsidy costs are reasonably accurate, some borrowers may not pursue a guarantee because they perceive the fee to be too high relative to the benefits of the guarantee, affecting the project's financial viability. To the extent that this financial viability is not distributed evenly across the technologies targeted by Title XVII, projects in DOE's portfolio may not represent the range of technologies targeted by the program.
In 1862, the Army Surgeon General established a repository in the Army Medical Museum for disease specimens collected from Civil War soldiers. The Army Institute of Pathology was created as a part of the museum in 1944, using the museum’s extensive collection of disease specimens to develop expertise in diagnostic pathology. In 1949, the Army Institute of Pathology was renamed the Armed Forces Institute of Pathology, and the museum became a unit within AFIP. In 1976, the Department of Defense Appropriation Authorization Act for Fiscal Year 1977 established AFIP in its current form, as a joint entity of the Departments of the Army, Navy, and Air Force, to offer pathologic support to military and civilian medicine in consultation, education, and research. Throughout the early part of the 20th century, AFIP was the only institution in the country that maintained expertise in every major area of anatomical pathology, attracting large numbers of consultations, trainees, and research grants on the basis of the institute’s unique reputation. However, according to AFIP’s Scientific Advisory Board, many changes in modern medical practice over the last several decades have altered the environment in which AFIP operates. For example, AFIP must now compete with over one hundred civilian medical institutions, many of which have in-house experts and comparable subspecialty areas of pathology. AFIP provides pathology expertise for all branches of the military. AFIP also provides pathology expertise for VA in exchange for a specified number of VA staff positions assigned to AFIP. Additionally, AFIP offers pathology expertise on a reimbursable basis for its civilian customers. To assist AFIP in this part of its mission, the Department of Defense Appropriation Authorization Act for Fiscal Year 1977 authorized ARP to be established as a nonprofit corporation with responsibility for encouraging and facilitating collaborative work between AFIP and civilian medicine. As such, ARP enters into contracts, collects fees, and accepts research grants on behalf of AFIP, in support of cooperative enterprises and interchange between military and civilian pathology. From 1998 through 2006, DOD and others conducted reviews that concluded that AFIP lacked controls over its financial operations and provided services for the civilian medical community without adequate reimbursement, and that the costs of the services it provided to VA exceeded the value of the paid staff positions VA provided in exchange. These reviews concluded that DOD, in effect, subsidized AFIP’s work for VA and civilian customers. In response to these concerns, AFIP began making changes to its operations in 2000, including the development and implementation of a business plan meant to increase AFIP’s revenue and reduce DOD’s level of funding to AFIP. DOD examined AFIP’s operations as part of the 2005 BRAC process, which was intended to find ways to consolidate, realign, or find alternative uses for current facilities given the U.S. military’s limited resources. In making its 2005 BRAC recommendations, DOD applied statutory selection criteria that included military value, costs and savings, economic impact to local communities, community support infrastructure, and environmental impact. The law required that, in applying these criteria, priority consideration be given to military value and allowed the other criteria to be considered to a lesser extent. In DOD’s evaluation, AFIP received a low military value score due to its large portion of civilian-related work.
Therefore, DOD recommended disestablishing AFIP by relocating critical military services and terminating civilian-related activities currently provided by AFIP. As part of the BRAC process, the Secretary of Defense issued a report containing his realignment and closure recommendations, which were then reviewed by the BRAC Commission. The 2005 BRAC Commission’s final report contained recommendations to disestablish AFIP and relocate certain services that AFIP provides. These recommendations became binding as of November 9, 2005. In accordance with BRAC statutory authority, DOD must complete closure and realignment actions by September 15, 2011. AFIP pathologists perform diagnostic consultations, education, and research services benefiting DOD, VA, and civilian communities. In 2006, AFIP provided over 40,000 consultations, almost half of which were for DOD physicians. AFIP’s educational services include live courses, distance learning activities, and texts that draw upon pathology material from the repository with the goal of training physicians in diagnosing the most difficult-to-diagnose diseases. DOD, VA, and civilian physicians use AFIP’s educational services, but the civilian community uses them more extensively than military physicians do. Regarding its research services, AFIP pathologists work individually and in partnership with other federal and private researchers using material from the repository to conduct research applicable to military operations as well as to diagnose and treat diseases affecting military and civilian health. AFIP’s primary mission is to provide diagnostic consultations. Its pathologists spend nearly twice as much time providing this service as they do providing education and research services. AFIP pathologists provide consultations for cases referred to them with and without diagnoses. That is, when physicians—clinicians or general pathologists—at civilian, DOD, or VA medical centers cannot make a diagnosis or when they are unsure of their initial diagnosis and are in need of another opinion, they can send the case to AFIP’s subspecialty pathologists for diagnostic consultation. According to the American Board of Pathology, there are 10 different areas of subspecialty pathology, such as dermatopathology and forensic pathology. Additionally, pathologists are recognized as subspecialists in other areas of pathology pertaining to particular cancers, such as breast or prostate cancer. Requesting physicians—those who send cases to AFIP in search of diagnostic consultations—typically need consultations for more complex cases that require the additional expertise of a subspecialty pathologist. In the course of providing these diagnostic consultations to the requesting physicians, AFIP receives pathology material that it is able to add to its repository. As a result, consultations have been instrumental in expanding the repository. Over time, AFIP has increased the amount of services provided for DOD and decreased the amount of services provided for civilians. The total number of diagnostic consultations that AFIP provided remained relatively stable from 2000 to 2004. However, as we previously reported, DOD diagnostic consultations provided by AFIP increased by 30 percent from 2000 through 2004, while its civilian consultations decreased by 28 percent. We also reported that nearly all of the decrease in civilian consultations occurred in the 2 years after AFIP announced that it would raise its consultation fees beginning in January 2003.
According to AFIP and civilian pathologists, this decrease in civilian diagnostic consultations was also attributable to a more competitive marketplace for obtaining consultations. These pathologists also cited the loss of nationally recognized experts at AFIP as another possible reason for the decline in the number of civilian diagnostic consultations being sent to AFIP. In 2006, AFIP provided almost half of its consultations to DOD physicians. From 2005 to 2006, AFIP decreased the total number of consultations it provided from 44,169 to 41,582. Consistent with earlier trends from 2000 to 2004, AFIP continued to increase the number and percentage of consultations provided to DOD and decrease the number provided to the civilian community from 2005 to 2006. (See table 1.) In 2006, the largest percentage of consultations, approximately 48 percent, was conducted for DOD, followed by those for VA and civilian physicians at nearly 27 percent and 25 percent, respectively. AFIP also provided about 1 percent of its consultations for others, which included other federal agencies and foreign military services. While AFIP receives consultation requests from all over the world, requests are heavily concentrated in the more populous states and the East Coast. (See app. II for maps of AFIP’s 2006 consultations.) In 2006, about 62 percent (25,621) of AFIP’s cases were for consultations where AFIP pathologists reviewed the initial diagnoses from DOD, VA, civilian, or other physicians for confirmation or change. For these cases, AFIP pathologists changed the initial diagnoses from requesting physicians in 10,987 cases, or about 43 percent of the time. For the remaining 57 percent of the cases (14,634), AFIP confirmed the requesting physicians’ initial diagnoses. When AFIP’s diagnoses differ from the requesting physicians’ initial diagnoses, it classifies the changes as either minor or major. According to AFIP, a minor change often involves a change in severity of the condition diagnosed or the choice of appropriate therapy. For example, the initial diagnosis may have correctly identified a tumor as malignant but may have assigned an incorrect type or level of aggressiveness, which could affect treatment and prognosis. In addition, AFIP classifies a change as major if it involves a change in the nature of the condition diagnosed. For example, a major change would include changing a diagnosis from malignant to benign. Both minor and major diagnosis changes can lead to a different treatment and, ultimately, a different outcome for the patient. As shown in table 2, most of AFIP’s changes to initial diagnoses that were provided by requesting physicians were classified by AFIP as minor changes. The types of consultations DOD, VA, and civilian physicians seek from AFIP differ somewhat, both in terms of the number of cases sent without a diagnosis and the type of pathology expertise requested. For example, 47 percent of DOD’s consultation requests were sent without an initial diagnosis, compared to 27 percent from VA and 31 percent from civilian physicians. This may be due, in part, to the type of expertise DOD and civilian physicians most commonly need, which also differs. For example, in 2006, almost a quarter of all DOD consultations were in the area of forensic toxicology, which includes examining material from autopsies and testing biological specimens for alcohol and drugs.
However, VA physicians most frequently requested AFIP’s environmental toxicology diagnostic consultations, while civilian physicians most frequently requested hepatic consultations—involving diseases of the liver—as well as gastrointestinal consultations. The other consultation service most frequently requested by DOD, VA, and civilian pathologists was dermatopathology—the interpretation of skin biopsies. AFIP, in conjunction with ARP, offers a variety of courses, conferences, and other educational services, generally for physicians, and tailors its curriculum to the most common as well as the most difficult-to-diagnose diseases. AFIP staff design and conduct live and distance learning courses that aid physicians in expanding their medical knowledge as well as fulfilling their state licensure requirements for CME credit. AFIP’s educational services cover a range of topics in the fields of pathology, radiology, and veterinary pathology, with particular emphasis on identifying emerging diseases, offering new insights into known diseases, and giving hands-on experience in diagnosing difficult cases. In developing material for conferences, courses, and texts, AFIP staff query a database of recent consultations, searching for the most common missed diagnoses—that is, those cases in which the requesting physician misdiagnosed the case—as well as the diagnoses for which requesting physicians most frequently did not make an initial diagnosis. In 2006, AFIP, in conjunction with ARP, offered 28 formal courses, 24 video teleconferences, and 4 Web-based courses. These courses qualify for CME credit, which assists DOD, VA, and civilian pathologists and other physicians in fulfilling state requirements for maintaining their medical licenses. Civilian physicians use AFIP’s training services more extensively than DOD and VA physicians do. In 2006, 61 percent of the students attending AFIP’s CME courses were civilians, 34 percent were DOD attendees, and 5 percent were from VA. Most live CME courses are attended predominantly by civilians. For example, in 2006, 96 percent of the residents who attended the Radiologic-Pathologic Correlation course were civilians. However, some courses are attended solely by military health professionals because they involve issues specific to DOD or because AFIP does not allow civilians to attend classes such as its Air Force Medical Forensic Sustainment course. Overall, AFIP’s courses have attracted instructors and students from around the world. In 2006, individuals representing over 70 institutions, including the Federal Bureau of Investigation, the National Institutes of Health, private academic institutions and medical centers, and MTFs participated in AFIP’s CME program. According to military pathologists, AFIP’s distance learning programs are a convenient and economical way to obtain CME credit and fulfill state licensure requirements. AFIP’s distance learning programs include AskAFIP, an online database maintained and operated by AFIP. To help users hone their diagnostic skills, AskAFIP allows them to query a database that contains information from AFIP’s collection of specific diagnoses, texts, case materials, and images from the repository. DOD, VA, and civilian physicians have access to AskAFIP. Also, as part of its distance learning educational services, AFIP pathologists review diagnoses provided by VA pathologists through a program known as the Systematic External Review of Surgicals.
In addition to offering courses, in conjunction with ARP, AFIP publishes examples of clinical-pathologic correlations, which describe the relationships that exist between the clinical symptoms or attributes exhibited by a patient and the pathological abnormalities of a specific disease or type of tumor. These correlations are published in texts called fascicles, which DOD, VA, and civilian pathologists told us are a primary reference source and serve as an important, frequently used tool as they practice pathology. The fascicles are updated to capture more recent developments in pathology. The combination of unique case material and the expertise of AFIP pathologists facilitates AFIP’s research that benefits DOD, VA, and civilian medicine and results in hundreds of publications each year. Research is conducted by AFIP pathologists, as well as by other federal and private researchers in collaboration with AFIP pathologists, primarily using material from the repository. All outside researchers are required to collaborate with an AFIP pathologist in order to access AFIP’s materials. The repository contains over 3 million disease specimens and their accompanying case histories dating back over 150 years. Because of the large volume of cases in the repository, researchers can conduct studies of considerable sample size. Since AFIP receives pathology material for many difficult-to-diagnose diseases, the repository contains complex and uncommon cases that have accumulated over time. Studying these samples allows for advances in diagnosis and treatment of diseases. For example, AFIP has accumulated a large collection of gastrointestinal stromal tumors, a relatively uncommon tumor. Recent studies involving this collection have led to advances in the identification of, and therapy for, this tumor. One of the responsibilities of AFIP pathologists is to classify the material that AFIP adds to the repository so that researchers can access it in the future. As medical knowledge evolves, AFIP pathologists reclassify material in the repository to better characterize it for future use. AFIP staff are also in the process of putting material from the repository into digital form to expand its use for research. AFIP conducts and collaborates on research applicable to military operations and general medicine, so its research affects the DOD, VA, and civilian communities. Although “militarily relevant” research has not been well-defined, AFIP staff said it generally includes subjects of direct interest to the military. For example, according to AFIP staff, research conducted in collaboration with the Armed Forces Medical Examiner has led to developments such as improved body armor and acute care of wounded personnel. Further, AFIP conducts and collaborates on infectious disease and cancer research, which has applicability for the civilian community as well. AFIP’s infectious disease research has focused on the characterization of potentially epidemic organisms, such as severe acute respiratory syndrome, as well as on the development of improved vaccines and the detection of biologic toxins, such as those that may be used in biological warfare. AFIP’s cancer research, including breast, gynecologic, and prostate cancers, has resulted in more accurate diagnosis and development of better treatment methods. Table 3 provides examples of AFIP’s research projects, including their impact. The research conducted at AFIP results in hundreds of publications per year, but the number of publications has been declining.
For example, in 2005 researchers at AFIP published 174 peer-reviewed articles and 121 abstracts, and in 2006 they published 145 peer-reviewed articles and 73 abstracts. In a previous GAO report, we found that from 2000 through 2004, the number of research protocols at AFIP declined from 371 to 296. AFIP staff said that they began to focus on increasing militarily relevant research and reducing DOD-funded civilian-focused research as early as 2001. The 2005 BRAC provision specifies that AFIP be disestablished. Accordingly, most services currently provided by AFIP will be terminated and other services will be relocated or outsourced. Specifically: DOD plans to outsource second-opinion consultations and some initial diagnostic consultations to the private sector through a newly established PMO. With the exception of two educational courses, DOD does not plan to retain and relocate the educational programs currently offered by AFIP. DOD plans to halt AFIP’s research and realign the repository, which is AFIP’s primary research resource, to the Forest Glen Annex, Maryland, under the management of USUHS. The BRAC provision allows DOD the flexibility to retain capabilities that were not specifically addressed in the provision. In accordance with this statutory authority, the ASD(HA) has retained four additional AFIP services and is considering whether to retain six others. According to DOD’s most recently developed implementation plan, dated February 2007, DOD had planned to begin implementation of the BRAC provision relating to AFIP in July 2007 and to complete action by September 2011. However, a provision from the 2007 supplemental appropriations act prevents DOD from reorganizing or relocating any AFIP functions until after DOD has submitted detailed plans and timetables for the proposed reorganization and relocation to Congress. Once the reorganization plan has been submitted, DOD can resume reorganizing and relocating AFIP. DOD plans to terminate AFIP’s provision of diagnostic consultations and outsource certain DOD diagnostic consultations to the private sector through a newly established PMO. More specifically, the BRAC provision requires that the PMO be established at the new Walter Reed National Military Medical Center in Bethesda, Maryland, to coordinate pathology results, contract administration, quality assurance, and control of DOD second-opinion consults worldwide. DOD plans to relocate sufficient personnel from AFIP to the new PMO to conduct its activities. Further, DOD’s justification for this provision states that DOD will also rely on the civilian market for providing initial diagnoses when the local pathology labs’ capabilities are exceeded. In determining the legal implications of the BRAC provision with respect to consultation services, DOD’s Office of General Counsel concluded that military second-opinion consultations as currently provided by AFIP would not be subject to retention because the PMO would be required to outsource these consultations. Initial diagnoses would be provided by military pathologists or military subspecialty pathologists at MTFs when possible, or outsourced through the PMO. Although the PMO would not coordinate civilian diagnostic consultations, DOD has not determined whether it would allow VA or other federal agencies to obtain diagnostic consultations—either initial or second-opinion—through the PMO. The PMO working group, including DOD and VA officials, met in August 2007 to discuss the establishment of the PMO.
Regarding educational services, DOD does not plan to relocate any of the programs currently offered by AFIP, with the exception of the enlisted histology technician training and the DOD Veterinary Pathology Residency Program. The BRAC provision requires DOD to relocate the enlisted histology technician training to Fort Sam Houston, Texas. The DOD Veterinary Pathology Residency Program would be relocated to Forest Glen Annex, Maryland. With respect to research, DOD plans to realign the repository, which is AFIP's primary research resource, to Forest Glen Annex, Maryland, to be managed by USUHS. USUHS issued a Request for Proposal in May 2007 to contract for a review of the quality of the pathology material and associated case records contained in the repository. USUHS officials told us that they will make further decisions regarding laboratory and storage facility requirements for the repository, as well as plans for staffing and research uses, when the evaluation is complete. Pending the outcome of this review, USUHS may employ 10 to 12 pathologists who would spend the majority of their time on research; these pathologists would also be responsible for classifying pathology material in the repository. Aside from the AFIP services discussed above, the BRAC provision required that some of AFIP's other services be retained by DOD and relocated into other facilities. For example, the provision requires relocating Legal Medicine to the Walter Reed National Military Medical Center in Bethesda, Maryland, and relocating the Armed Forces Medical Examiner, DNA (deoxyribonucleic acid) Registry, and Accident Investigation to Dover Air Force Base, Delaware. As part of its review regarding the disestablishment of AFIP, the BRAC Commission found that the medical professional community regarded AFIP and its services as integral to the military and civilian medical and research community. The commission also found that DOD substantially deviated from its selection criteria by failing to sufficiently address several AFIP functions. As a result, the commission amended DOD's initial recommendation to add that AFIP capabilities not specified in the final recommendation would be absorbed into other DOD, federal, or civilian facilities, as necessary. The revised language was approved by the President as part of the final BRAC provision. As revised, DOD has the flexibility to review AFIP capabilities or services not specifically addressed in the BRAC provision to determine which functions to retain. As a result of the amendment, the ASD(HA) informed key DOD officials in a November 16, 2006, memorandum that he had approved the retention of four services—the DOD Veterinary Pathology Residency Program, Automated Central Tumor Registry, Center for Clinical Laboratory Medicine, and Patient Safety Center. He also informed them that the remaining AFIP services would be disestablished unless any of the key officials identified the need to retain specific services. Based on responses from the key officials, an additional six AFIP services were recommended for retention. As of September 2007, the ASD(HA) had not made a final decision on them. These six services include diagnostic telepathology, two biodefense projects, reserve biological select agent inventory, depleted uranium (DU) testing, and cystic fibrosis testing. In addition, VA expressed an interest in having DOD retain the DU testing capability.
Table 4 summarizes AFIP services that will be relocated or established as specified in the BRAC provision, those that were subsequently added by the ASD(HA) to be retained, and those that were recommended for retention by the DOD officials and are awaiting final decision. (See app. III for a description of services currently performed by AFIP that are to be retained and relocated, or newly established, or are awaiting final decisions.) According to DOD's most recently developed implementation plan, execution of the BRAC provision regarding AFIP was scheduled to begin in July 2007 and be complete by September 2011. Figure 1 summarizes DOD's plans to terminate AFIP's three key services by December 2010. It also illustrates DOD's timeline for relocating other AFIP services that the BRAC provision designated for retention. Several rounds of staff reductions were anticipated to occur as DOD terminated or relocated AFIP services. As figure 1 shows, DOD's plans left a lag between the planned end of AFIP's DOD diagnostic consultations in December 2010 and the date the PMO was expected to be operational, September 2011. Implementation of these plans was put on hold by the requirements of section 3702 of the fiscal year 2007 supplemental appropriations act, which suspended all BRAC actions affecting AFIP until after DOD submits detailed plans, due by December 31, 2007, to the House and Senate Appropriations and Armed Services Committees. DOD officials acknowledge that the timeline as envisioned in their February 2007 implementation plan can no longer be met and that the full amount of one-time savings from the disestablishment of AFIP will not be realized, although they believe that they may still be able to complete all actions required by the BRAC provision by 2011. While DOD is required to share more information regarding its plans with Congress before the end of the year, other developments could affect the implementation of those plans. Specifically, on May 17, 2007, the House passed H.R. 1585, a bill for the National Defense Authorization Act for Fiscal Year 2008, which contains a provision that would require DOD to establish a "Joint Pathology Center" at the National Naval Medical Center in Bethesda. On October 1, 2007, the Senate passed its version of the same bill. However, the Senate-passed version contains a provision that would authorize, rather than require, DOD to establish a Joint Pathology Center at Bethesda, "to the extent consistent with the final recommendations of the 2005 Commission as approved by the President." If a new Center is established under either provision, it would be required to provide diagnostic pathology consultation, pathology education, and diagnostic pathology research. In addition, the Senate bill would require that the Center, if established, provide maintenance and continued modernization of the tissue repository. As of the publication of this report, the House and Senate had not reached agreement at conference on any provision related to a new Joint Pathology Center. Although AFIP is a noted center for pathology expertise, closing AFIP may have minimal effect on DOD, VA, and civilian communities because pathology services are available to them elsewhere. However, a smooth transition depends on DOD's actions to address key challenges involved in developing new approaches to obtaining subspecialty pathology consultations and managing the repository to facilitate its use for research.
DOD and VA officials have begun to identify the challenges, but have not decided upon strategies to address them. In large part, DOD, VA, and civilian pathologists may be able to obtain services elsewhere to replace those currently provided by AFIP. Diagnostic consultations: Other medical institutions currently provide diagnostic consultations that require subspecialty expertise. For example, Massachusetts General Hospital (Boston, Massachusetts) and M. D. Anderson Cancer Center (Houston, Texas) each provide 60,000 or more pathology consultations per year. While AFIP has many different subspecialty areas, major civilian medical institutions, such as The Johns Hopkins Hospital (Baltimore, Maryland) and Memorial Sloan-Kettering Cancer Center (New York, New York), have from 10 to 17 different subspecialty areas. Pathologists we interviewed emphasized the importance of being able to obtain consultations from expert pathologists, wherever they may work. They also stated that pathologists with particular expertise who move from AFIP to the private sector may be able to continue to provide consultations from whichever institutions they may join. Most DOD and VA pathologists noted that even though MTFs and VA medical centers can readily access AFIP consultations without incurring additional fees, they already use subspecialty pathologists from civilian medical institutions on occasion for consultations because of their need for particular subspecialty expertise and their concerns about obtaining a diagnosis in a timely manner. In addition, some MTFs have subspecialty pathologists who can provide consultations for other military physicians. For example, Brooke Army Medical Center and Wilford Hall Medical Center—both located in San Antonio, Texas—each have more than seven different subspecialty areas. According to pathologists from the five MTFs we interviewed, subspecialty pathologists from their centers currently provide consultations to other nearby MTFs. Pathology education: Other institutions also provide pathology education. For example, CAP offers educational courses covering a range of topics such as histotechnology and molecular pathology. DOD, VA, and civilian pathologists we interviewed told us that they have fulfilled CME requirements through other institutions and could continue to do so. Pathologists we interviewed said that DOD and VA pathologists generally make independent decisions about which classes to attend and how to meet accreditation requirements. Military pathologists we interviewed also said that due to limited budgets, pathologists generally do not travel to AFIP to attend courses because other pathology organizations, such as CAP, offer CMEs that are accessible without the need to travel. Most DOD, VA, and civilian pathologists we interviewed said that AFIP's Radiologic-Pathologic Correlation course is unique and valuable to the radiology profession. Some of the pathologists we interviewed said that this is because the course utilizes the expertise of physicians who work with pathology material from a large volume of difficult-to-diagnose cases, requires attendees to bring unique specimens for class analysis and discussion, and utilizes material from AFIP's repository, which houses a comprehensive collection of specimens. Further, many pathologists and representatives from radiology organizations told us that it is the most common way radiology residents fulfill a requirement to have specific training in pathology.
Although the course is recognized as unique, according to guidance set forth by the Accreditation Council for Graduate Medical Education, radiologists could fulfill their accreditation requirements through avenues other than AFIP. In addition, according to DOD officials, it is not DOD's mission to train civilian radiology residents, although we believe that DOD could be in a position to assist outside groups if any expressed interest in becoming responsible for maintaining the course. Research services: The type of research historically conducted by AFIP could be conducted at other institutions or by pathologists who remain with DOD. USUHS will continue to perform militarily relevant biomedical research, focusing on health promotion and disease prevention, as it gains responsibility for the repository—AFIP's primary research tool. Additionally, the Office of the Armed Forces Medical Examiner has been responsible for conducting research applicable to military operations. Because it is being retained, it could continue to do so. Also, AFIP has partnered with other government, academic, and private sector institutions to carry out research services. Specifically, AFIP staff have conducted research affecting general medicine through collaborations with external organizations, such as The Johns Hopkins Hospital and the Mayo Clinic. These organizations will likely continue to fund medical research and could possibly continue to conduct research using pathology material from the repository. Although USUHS has not finalized its plans regarding the repository, its intent is to make the pathology material accessible to others, including civilian researchers, to the extent that doing so is approved by DOD, practicable, and legally feasible. Given that AFIP is a central source that provides its customers with definitive consultations on the most difficult-to-diagnose cases, DOD and VA pathologists face challenges in obtaining similar consultative expertise once AFIP is disestablished. These challenges include determining how to effectively use existing subspecialty pathology resources, obtain outside expertise, and ensure coordination and funding of services to encourage efficiency while avoiding disincentives to quality care. In addition, DOD must decide whether VA could obtain consultation services through the PMO and whether VA will be able to provide some subspecialty pathology expertise for DOD. While DOD and VA officials have begun the process to identify these challenges, as of mid-August 2007, they had not yet developed management strategies to mitigate them. Effective utilization of existing resources: While DOD officials told us that they might be able to perform some in-house diagnostic consultations for MTFs, they have not evaluated their existing medical resources to determine the extent to which such consultation services can be performed. According to DOD officials, some large MTFs have subspecialty expertise and might be able to absorb some of the demand for consultations, but DOD has not identified the potential volume and type of consultations that these large MTFs could absorb. Further, DOD pathologists expressed concerns that MTFs would not be able to absorb many additional consultations without increasing the number of subspecialty pathologists staffed at MTFs. This could be challenging, they said, because it is difficult to retain pathologists within the military.
Because DOD is retaining some of its pathology capabilities from AFIP under the BRAC provision, such as the Armed Forces Medical Examiner, it will continue to have expertise available to provide services in the area of forensic toxicology—DOD's most frequently used consultation service in 2006. Further, several DOD officials were concerned that the DOD General Counsel's interpretation of the BRAC provision requiring outsourcing through the PMO would preclude DOD from providing second-opinion consultations using expertise within its MTFs. In addition, although VA may be able to absorb some of its own consultations using its subspecialty pathologists, including those who are currently assigned to AFIP, VA pathologists told us that VA is limited in how many additional consultations its current subspecialty pathologists could provide. The PMO process: How the PMO functions and obtains diagnostic services from medical centers outside DOD and VA has important implications from both a quality-of-care and a cost standpoint. DOD and VA officials we interviewed indicated that DOD faces challenges in developing the new PMO that can outsource for quality pathology services; such challenges involve issues related to the timeliness of consultations and the ability to obtain appropriate expertise at a reasonable cost. As of August 2007, DOD had not formulated its management strategies for addressing the following issues concerning how the PMO will function. Assisting other federal agencies with obtaining consultations. Although DOD has discussed the possibility that the PMO could include VA in outsourced diagnostic consultations, no decisions had been made as of mid-August 2007. Because VA has received over a quarter of AFIP's total consultations, VA officials have expressed an interest in continuing to receive consultations through the PMO once DOD discontinues offering AFIP consultations. VA officials also expressed concerns about the cost of obtaining consultations outside of AFIP, which they estimated would be much greater than the financial support VA currently provides to AFIP for its services. In addition, the officials stated that AFIP has been responsible for VA's DU program, and as of June 2007, VA officials were uncertain about the extent to which staff and equipment providing these services would be sufficient to meet future needs. VA officials stated that their agency did not have the equipment or expertise to conduct the analyses needed for this program, and for testing of other types of embedded fragments, such as cobalt, nickel, and tungsten. VA officials indicated that testing for DU and other potentially harmful embedded fragments plays an important role in providing high-quality health care to recently injured combat veterans. As we previously discussed in this report, DOD officials are considering the possibility of retaining DU testing. Obtaining consultation services. Several military pathologists expressed concerns about the challenges DOD and VA would face in identifying and obtaining needed subspecialty expertise from pathologists. These concerns stem, in part, from their understanding of AFIP's capabilities to provide consultations for difficult-to-diagnose cases by involving different types of subspecialty pathologists as needed. Within AFIP, cross-consultation among experts is available under one roof.
Because DOD will have to determine a new method for obtaining consultations through the PMO, military pathologists expressed concerns that it might be more difficult to access expertise dispersed among different institutions and obtain accurate diagnostic information. DOD and VA pathologists also expressed concerns regarding whether continuity of patient care would be maintained for retired military personnel if pathology specimens from active duty personnel and veterans are no longer sent to one central laboratory, such as AFIP. At present, if a patient has had a previous consultation, the material is available from the repository for comparison if AFIP is requested to conduct another consultation at a later date for the same patient. This can be important for the patient's care—for example, in determining if a patient's cancer is metastasizing or if a precancerous condition is worsening. AFIP pathologists expressed concern that patient care could be compromised if the pathologists providing consultations could no longer obtain their patients' previous specimens, slides, or case notes from the repository. In addition, according to an AFIP pathologist, the repository is particularly valuable for AFIP's consultation services because it can serve as a reference tool to compare pathology material from one patient to that of many others to confirm a diagnosis. VA and AFIP pathologists have raised concerns about whether alternate sources of consultation services obtained through the PMO will be able to provide the same continuity or quality of service unless pathologists from these alternate sources can use the repository as a reference. Further, DOD pathologists expressed concern about whether private sector institutions with the best subspecialty pathology expertise could absorb the 40,000 consultations that AFIP has conducted annually. DOD pathologists also indicated that as of August 2007, DOD had not yet developed a management strategy to address this challenge. Timeliness of consultation services. DOD pathologists we interviewed were also concerned that obtaining consultations may take longer than it does under AFIP because it is unclear how DOD will identify and obtain needed pathology expertise. Timeliness of consultation services is important. For example, understanding the aggressiveness and particular stage of a cancer at a given point in time can influence patient treatments and outcomes. Some pathologists also anticipate that turnaround time for DOD's consultations may increase due to the difficulty of coordinating among pathologists with varied subspecialty expertise who are dispersed among different institutions, and that this could impair the quality of services that DOD obtains. As of August 2007, DOD had not outlined the management strategy that it will pursue to ensure timely access to consultative services. Funding mechanisms. DOD pathologists' access to subspecialty pathology expertise could also be affected by how DOD structures the budget for outsourced consultations, whether centrally or at each MTF, and how it plans to mitigate any related funding disincentives. According to DOD officials, as of July 2007, DOD had not made decisions regarding whether the budget for consultations would be maintained centrally at the PMO or if each MTF would receive a separate budget for outsourced consultations. Because DOD pathologists do not have to pay for AFIP's consultation services, there is no financial disincentive to use them.
Several pathologists we interviewed expressed concern that decentralized funding for consultation services would create disincentives to obtaining consultations and could ultimately affect the quality of the medical care the military would receive for such services. More specifically, these officials asserted that a decentralized funding system would require a Department of Pathology Chair within an MTF to scrutinize the department's competing demands for resources and make decisions about whether to obtain outside pathology expertise or spend financial resources on other patient care needs. VA pathologists also expressed concern that funding issues could contribute to increasing the difficulty of obtaining subspecialty consultations. If pathologists cannot obtain subspecialty consultations when they are unsure of their diagnosis, patients might be misdiagnosed. This is particularly relevant since, as we discussed earlier in this report, AFIP has changed requesting physicians' initial diagnoses for about 43 percent of the cases it reviews. Minimizing costs of services through volume discounts. By working with VA, DOD could further increase its economies of scale by purchasing a higher volume of consultation services. However, several DOD and VA pathologists expressed concerns that if DOD chooses to obtain services from the lowest bidder, the quality of consultations could be compromised. They informed us that large national laboratories would likely be the lowest bidders, but these institutions might lack the subspecialty expertise to provide the best services. In fact, such large national laboratories currently use AFIP consultation services. Further, DOD pathologists we interviewed expressed concern about whether DOD would obtain the best possible subspecialty consultations for their patients. DOD has formed a working group, which met for the first time in August 2007, to address issues pertaining to obtaining consultations. This group includes representatives from the Offices of the Surgeons General of the Army, Navy, and Air Force, as well as other DOD and VA officials. According to DOD officials, the working group spent its first meeting identifying the challenges faced by DOD in obtaining needed expertise but had not yet developed specific options to address the challenges. Because DOD has not developed its strategy regarding how it will populate, maintain, and use the repository, some pathologists we interviewed were concerned about the future of the repository and whether it would continue to be a viable research tool. Recently, USUHS awarded a contract to study the usefulness of the pathology material in the repository. According to DOD, once that study is completed in October 2008, USUHS plans to convene a panel of experts to develop a blueprint on how to use the repository for research, and then will likely contract for development of a detailed plan on how best to populate, manage, and use the repository. USUHS does not intend to finalize key decisions until that process is complete. USUHS officials told us that one of the challenges they face in the future is how they will add pathology material to the repository in order to maintain its viability as a research tool. They explained that AFIP generally populates its repository with pathology material obtained from its consultation services. As a result, the repository includes material from the DOD, VA, and civilian populations.
Additionally, AFIP’s Radiologic- Pathologic Correlation course has historically contributed to the growth of pathology material in the repository because students, who are primarily civilians, are required to submit samples to AFIP that have pathologic significance. We estimate that the repository gains approximately 1,200 to 2,400 samples per year from students attending this course. Pathologists we interviewed explained that the value of the material in the repository is related to the number of cases it accumulates for a particular disease. That is, in order for a researcher to be able to identify the characteristic patterns of a disease allowing for its diagnosis and treatment, there must first be a sufficient number of cases of the particular disease. USUHS officials told us that due to the large volume of cases that AFIP accumulated in the repository, including complex cases, researchers can currently conduct studies of considerable sample size. Thus, the manner in which USUHS plans to continue to accumulate material in the repository can influence the pace of research. Because USUHS does not provide pathology consultations, in the absence of civilian consultations it will need to develop other strategies to populate the repository. The strategy that USUHS officials discussed with us was to populate the repository with specimens from military hospitals. Populating the repository in this manner, however, could skew the repository since military hospitals generally draw patients that are largely young, male, and active. This could decrease the usefulness of the repository, ultimately affecting the breadth of research. As a result, it is important that USUHS develop a strategy to determine how it will populate the repository, considering both the quantity of pathology material for each disease as well as the quality and type of material from which it draws. DOD, VA, and civilian pathologists we interviewed also recognize that proper maintenance of pathology material is necessary for retaining the repository’s optimal usefulness. Specifically, as medical knowledge of tumors and other conditions evolves, material requires reclassification by pathologists with subspecialty expertise in order to be useful. As such, repositories can become useless without continuous update and evaluation. Officials from academic centers that we spoke with said that the failure to preserve, maintain, and update the repository would be a tremendous loss to pathology, and general medicine overall. USUHS officials said that having staff pathologists with subspecialty expertise responsible for properly classifying pathology material is important to the repository’s viability. USUHS discussed with us that it may employ about 10 to 12 pathologists with subspecialty expertise who would be responsible for reclassifying material in the repository as needed. USUHS officials expressed a desire to expand the use of the repository to others outside of DOD—such as pharmaceutical companies and cooperative ventures with other academic institutions—so that the repository’s role in general medical research could continue and benefit the general population. However, USUHS officials said that they first need to determine policy, financial, and legal ramifications, such as patient privacy issues, before they make any decisions regarding research access to the repository assets. 
USUHS officials also told us that the pathologists they hire would have access to pathology material in the repository and would also be responsible for conducting militarily relevant research. AFIP is a noted institution that has provided pathology expertise in a range of subspecialty areas, and its customers value the services that it provides. Congress has mandated that DOD provide a detailed plan on disestablishing AFIP by December 2007, which gives DOD an opportunity to address potential challenges involved with closing the facility. DOD awarded a contract to study the usefulness of the material in the repository, which it anticipates will be completed by the end of 2008. DOD anticipates using the study, a subsequent panel of experts, and a possible second contract to develop a detailed implementation plan to help make decisions on how the repository will be managed. As part of its planning process, it is critical that DOD's plan go beyond the steps to terminate, relocate, or outsource AFIP's services and include implementation strategies that detail how DOD will organize consultation services and manage the repository in the future. DOD has not yet developed these strategies—strategies that could help mitigate potential negative impacts of disestablishing AFIP and facilitate a smooth transition as DOD looks to other sources for obtaining high-quality pathology services. As part of DOD's initiative to develop a plan for disestablishing AFIP, we are making three recommendations to the Secretary of Defense that could help mitigate the potential negative impacts of the disestablishment. We recommend that the Secretary of Defense include in the December 2007 plan to Congress implementation strategies for how DOD will use existing in-house pathology expertise available within MTFs, identify and obtain needed consultation services from subspecialty pathologists with appropriate expertise through the PMO in a timely manner, and solidify the source and organization of funds to be used for outsourced consultation services. Within 6 months of completion of DOD's study regarding the usefulness of the pathology material in the repository, which is to be finished in October 2008, the Secretary should require USUHS to provide Congress with information on the status of the repository's assets and their potential for research use. Prior to USUHS assuming responsibility for the repository, the Secretary should provide a report to Congress on its implementation strategies for how it will populate, manage, and use the repository in the future. The implementation strategies should include information on how USUHS intends to use pathology expertise to manage the material, obtain pathology material from a wide variety of individuals, maximize availability of the repository for research through cooperative ventures with other academic institutions, and assist interested groups—if any—in supporting the continuation of educational services, such as the Radiologic-Pathologic Correlation course. DOD and VA provided written comments on a draft of this report, which are included in appendix IV and appendix V. In commenting on a draft of this report, DOD concurred with the report's findings and conclusions and fully concurred with our recommendation for DOD to include its implementation strategies for organizing future pathology consultation services in its December 2007 plan to the Congress.
However, DOD partially concurred with the recommendation to report to the Congress within 6 months of completing its study on the viability of the repository. Specifically, DOD indicated that USUHS would not be in a position to report its strategies on managing the repository until further work was completed. As a result, we modified our recommendation to limit the reporting requirement to information on the viability of material in the repository and its usefulness for research. We also added another recommendation that DOD should report to Congress at a later date on USUHS's planned strategies for managing the repository. In its written comments, VA agreed that the draft report was factually accurate, but indicated that it did not fully capture the essential nature of AFIP's services to VA and DOD or fully address the impact of its closing. We believe that we provided a balanced assessment of AFIP's services and the impact of its closing. In its comments, DOD agreed with the description of the challenges it faces in developing new approaches to obtaining pathology expertise through the PMO and managing the repository to ensure that it remains a rich resource for civilian and military research. DOD emphasized that it was in the process of developing alternative strategies that would be coordinated internally and with VA to ensure that the strategies would meet DOD's needs, assist VA, and be in accordance with BRAC recommendations. DOD concurred with our recommendation that the Secretary of Defense should include in the December 2007 plan to Congress implementation strategies for how DOD will use existing in-house pathology expertise available within MTFs, identify and obtain needed consultations from subspecialty pathologists with appropriate expertise through the PMO in a timely manner, and solidify the source and organization of funds to be used for outsourced consultation services. In addition, DOD agreed that the Secretary of Defense should submit a plan to Congress within 6 months of completion of the repository evaluation contract to provide information on the status of the pathology material in the repository and its research potential. However, DOD indicated that the results of the evaluation contract will likely result in another contract to help develop a detailed strategy on how USUHS will populate, manage, and use the repository. Therefore, DOD will not be able to report on how USUHS will populate, manage, and use the repository within 6 months of completion of the repository evaluation contract and did not concur with that portion of the draft recommendation. Given this, we modified our recommendations in this report to reflect the steps DOD anticipates taking. Specifically, we separated the recommendations to address reporting on the viability of the repository material and the strategies for its maintenance and use. In commenting on a draft of this report, VA indicated that the report was factually accurate, but did not sufficiently describe the potential impact associated with closing AFIP. VA focused on five concerns—DU testing, stagnation of the repository, difficulties in replacing AFIP's consultation services and obtaining them through the PMO, potential impact on patient care, and the potential costs to replace existing services. VA commented that AFIP's testing of DU and other types of potentially harmful embedded fragments was essential to providing quality health care to recently injured veterans.
VA indicated that our report did not sufficiently emphasize the importance of these AFIP services. While the report clearly states that DOD is considering retaining DU testing, we added text in this report to highlight VA's concerns, including those about testing other types of potentially harmful embedded fragments. VA also indicated that the repository contained a large archive of veterans' pathology specimens that would be invaluable for future clinical and research endeavors and expressed concern that DOD will allow the repository to stagnate upon closure of AFIP. Our report acknowledges the importance of the repository to veterans' care. This is why we discussed the challenges of maintaining a viable repository in the report and made a specific recommendation that DOD provide information on future plans for it. Regarding consultation services, VA expressed concerns that other institutions may not have the capacity to absorb AFIP's workload; some types of services might not be available; and obtaining services through the PMO may adversely affect timeliness and make it more complex and inefficient for local facilities to obtain pathology services. In our report, we discussed such concerns and stated that DOD faces challenges in obtaining expertise similar to what AFIP offered. As a result, we recommended that DOD report to the Congress on how it would address these challenges and obtain pathology services in the future. VA stated that the report did not fully discuss the impact of closing AFIP on patient care—especially the significance of changing diagnoses and of providing timely services. We disagree. The draft report clearly states that changing a diagnosis can lead to different treatment and, ultimately, a different outcome for the patient. The report also states that timeliness is important because it can affect patient treatment and outcomes. VA appears to assume that DOD will not be able to obtain timely, high-quality consultative services through the PMO. In the report, we stated that obtaining quality consultation services in a timely manner through the PMO is one of the challenges that DOD would have to address. Until DOD develops its strategies, we would not have a basis to determine whether it would be likely to meet this challenge. VA commented on the potentially high cost of procuring alternative sources for AFIP's services. We did not conduct an overall assessment of whether it would cost DOD more to obtain consultations from other sources than it would to maintain AFIP. DOD considered costs when developing its recommendation to the BRAC commission to outsource consultations. However, as we have reported previously, implementing other BRAC recommendations has led to lower cost savings than DOD had estimated. Regarding the costs for VA, we state in our report that earlier studies had found that the costs of the services that AFIP provided to VA exceeded the value of the paid positions VA provided in exchange. AFIP officials indicated that this continued to be true in fiscal year 2007. As a result, depending on how and where VA obtains consultation services, its costs could increase. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from this date. At that time, we will send copies of this report to the Secretary of Defense, the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request.
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To describe key services that the Armed Forces Institute of Pathology (AFIP) provides to the Department of Defense (DOD), the Department of Veterans Affairs (VA), and civilian communities, we reviewed recent reports describing AFIP's services and business practices, including a previous GAO report, an Army Audit Agency report on AFIP's business plan, and a BearingPoint report on AFIP's capabilities, as well as other relevant reports, including some from VA. We also interviewed officials from AFIP, DOD, VA, the American Registry of Pathology (ARP), pathology associations such as the College of American Pathologists (CAP), the American Society for Investigative Pathology, and the Association of Pathology Chairs, as well as radiology associations, such as the American College of Radiology and the Canadian Association of Radiologists, to collect information on AFIP's core services. Additionally, we obtained data from AFIP on the services it provides. To assess the reliability of these data, we interviewed knowledgeable agency officials and reviewed related documentation. We determined that the data were sufficiently reliable for the purposes of this report. To describe DOD's plans to terminate, relocate, or outsource services currently provided by AFIP, as required by the Base Realignment and Closure (BRAC) provision, we interviewed officials from DOD's Offices of the Surgeons General of the Army, Navy, and Air Force; the Office of the Assistant Secretary of Defense for Health Affairs; the Office of the General Counsel; the TRICARE Management Activity; the Office of the Deputy Under Secretary of Defense (Installations and Environment); AFIP; and the Uniformed Services University of the Health Sciences (USUHS). We also interviewed pathologists from military treatment facilities (MTF) and VA medical centers. In addition, we reviewed the BRAC business plan for the Walter Reed Army Medical Center and related assumptions and analysis that led to the BRAC decisions. To assess the potential impacts of disestablishing AFIP on the military and civilian communities, we interviewed pathologists from AFIP, ARP, five MTFs and five VA medical centers, as well as civilian pathologists from four major medical centers. We interviewed representatives from pathology and radiology associations, including ARP, CAP, the American Society for Investigative Pathology, the Association of Pathology Chairs, the American College of Radiology, and the Canadian Association of Radiologists, to obtain their views regarding the potential impact of discontinuing AFIP's core services. In addition, we reviewed data from various reports and other documents to assess the potential impact of discontinuing the three key services that AFIP currently provides. We performed our work from March 2007 through November 2007 in accordance with generally accepted government auditing standards. In 2006, AFIP provided almost half of its consultations for DOD, with the rest predominantly for VA and civilian physicians. (See fig. 2 for the 2006 distribution of AFIP's DOD consultations, fig. 3 for its VA consultations, and fig.
4 for its civilian consultations.) Appendix III: Description of Services Performed by the Armed Forces Institute of Pathology (AFIP) Legal Medicine: Legal Medicine provides consultation, education, and research on medical legal, quality assurance, and risk management issues to the Department of Defense (DOD); manages a registry of closed DOD medical malpractice cases; manages the DOD Centralized Credentials Quality Assurance System; assists the Uniformed Services University of the Health Sciences (USUHS) with the master's degree program in Forensic Sciences; awards continuing medical education (CME) credits in medical legal, quality assurance, and risk management to nurses and physicians; and publishes the journals Legal Medicine and Nursing Risk Management. National Museum of Health and Medicine: The National Museum of Health and Medicine was established during the Civil War as the Army Medical Museum. The Museum promotes the understanding of medicine of the past, present, and future, with a special emphasis on American military medicine. It has five major collections: Anatomical, Historical, Otis Historical Archives, Human Developmental Anatomy Center, and Neuroanatomical, which are estimated to contain more than 24 million objects. Repository: The National Pathology Repository contains approximately 3 million case files and associated paraffin blocks, microscopic glass slides, and formalin-fixed tissue specimens. Tens of thousands of cases are added to the repository each year. Staff code all material for future research use. The Office of the Armed Forces Medical Examiner, DNA (deoxyribonucleic acid) Registry, and Accident Investigation: The Office of the Armed Forces Medical Examiner conducts scientific forensic investigations to determine the cause and manner of death of members of the Armed Forces and of civilians whose deaths come under exclusive federal jurisdiction. The office provides consultative services in forensic pathology, forensic toxicology, forensic anthropology, and DNA technology, as well as on-site medical legal investigations of military accidents. It is the only federal resource of its kind, so other federal agencies frequently use its services. The DOD DNA Registry is at the forefront of nuclear and mitochondrial DNA technology, supports the Office of the Armed Forces Medical Examiner in identification, and serves as the repository for specimens obtained from military personnel to be used for identification. Enlisted histology technician training: The Tri-Service School of Histotechnology is the only military histopathology training program, according to a DOD official. It consists of 180 training days in the technical operations of anatomic pathology. Training includes instruction in the theory and application of histotechnology and practical training in the fixation, processing, embedding, microtomy, and staining of tissue specimens prior to examination by a pathologist. The curriculum also includes instruction and practical experience as a postmortem examination (autopsy) assistant. Program Management Office (PMO): The PMO will be newly established to coordinate pathology results, contract administration, and quality assurance and control of DOD second-opinion consults worldwide. DOD Veterinary Pathology Residency Program: The DOD Veterinary Pathology Residency Program is a 3-year postdoctoral training program. Residents are involved in consultation, education, and research during the program.
The residency culminates in a 2-day examination given by the American College of Veterinary Pathologists, and successful completion of this examination results in board certification in veterinary anatomic pathology. Automated Central Tumor Registry: The Automated Central Tumor Registry provides the uniformed services MTFs with the capability to compile, track, and report cancer data on DOD beneficiaries. The objective of the registry is to maintain a research-quality database for cancer reporting that supports outcome analysis, referral patterns, trend analysis, statistical reporting, health care analysis, epidemiology, and uniform data collection and tracking. Center for Clinical Laboratory Medicine: The Center for Clinical Laboratory Medicine directs the operation of the DOD Clinical Laboratory Improvement Program, as defined by DOD Instruction 6440.2 and Public Law No. 100-578; administers law and federal policy for military medical laboratory operations in peace, contingency, and wartime, ensuring that no restrictions or cessation of laboratory services impedes DOD mission requirements; and acts as gatekeeper for DOD and Centers for Disease Control and Prevention (CDC) initiatives to develop a biological warfare detection and response system, that is, the National Laboratory Response Network. Patient Safety Center: The Patient Safety Center manages a comprehensive patient safety data registry for DOD. The DOD Patient Safety Registry is a database that gathers standardized clinically relevant information about all instances and categories of actual events and close calls. This registry is used to identify and provide feedback on systemic patterns and practices that place DOD patients at risk, and thereby it stimulates, initiates, and supports local interventions designed to reduce the risk of errors and to protect patients from inadvertent harm. The Patient Safety Center publishes DOD Patient Safety Alerts, and it produced the first Patient Safety Toolkit targeting patient fall reduction. Diagnostic telepathology: This practice of pathology involves using telecommunications to transmit data and images between two or more sites remotely located from each other, according to a DOD official. The data include clinical information about the patient, such as signs, symptoms, treatment, and response; gross description of the surgical specimen(s); and digital images of the processed specimen. These data are transmitted electronically, allowing a pathologist practicing in a geographically distant site to consult another pathologist for a second opinion, or to consult other pathologists who are experts on particular disease processes. Biodefense Project – The Joint Biological Agent Identification and Diagnostic System: The Joint Biological Agent Identification and Diagnostic System provides rapid identification and diagnostic confirmation of biological agent exposure or infection, according to a DOD official. The standalone system consists of a portable unit to perform sample analysis, a laptop computer for readout display, and assay reagent test kits to identify multiple biological warfare agents, infectious disease agents, and biological toxins. Biodefense Project – The Critical Reagent Program: The Critical Reagent Program provides bulk quantities of DNA extracted from selected biological threat agents, according to a DOD official.
These are then used to develop validated, high-quality immunological and DNA-based biodetection reagents to support different biological warfare agent detector platforms. Reserve Biological Select Agent Inventory: The Reserve Biological Select Agent Inventory is registered with CDC and with the Army Medical Command, and includes over 1,500 strains of controlled biological select agents and toxins, according to a DOD official. These are stored in freezers in secure Biosafety Laboratory level 3 areas of AFIP. Storage, use, and transfer of any agents or toxins are strictly controlled and regulated by CDC and Army regulations. Depleted uranium (DU) testing: DU Urine Testing supports medical surveillance programs by measuring the levels of uranium in patients' urine and identifies the specific source of exposure by accurately measuring uranium isotopic ratios, according to a DOD official. DU Testing in Body Fluids and Tissue provides chemical analysis of embedded DU fragments in tissues removed from shrapnel wounds. Cystic fibrosis testing: A test for cystic fibrosis is one of several tests for genetically inherited diseases that are recommended by the Department of Health and Human Services' Health Resources and Services Administration and the American College of Medical Genetics. AFIP ceased cystic fibrosis testing on June 1, 2007. All DOD cystic fibrosis tests are currently being performed by commercial labs or other DOD labs. In addition to the contact named above, Sheila Avruch, Assistant Director; Adrienne Griffin; Cathy Hamann; Nora Hoban; Jasleen Modi; Carolina Morgan; and Andrea Wysocki made key contributions to this report. Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007. Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007. Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007. Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007. Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007. Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007. Military Bases: Observations on DOD's 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005. Military Bases: Analysis of DOD's 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005. Armed Forces Institute of Pathology: Business Plan's Implementation Is Unlikely to Achieve Expected Financial Benefits and Could Reduce Civilian Role. GAO-05-615. Washington, D.C.: June 30, 2005.
Military Base Closures: Updated Status of Prior Base Realignments and Closures. GAO-05-138. Washington, D.C.: January 13, 2005. Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004. Military Base Closures: Observations on Preparations for the Upcoming Base Realignment and Closure Round. GAO-04-558T. Washington, D.C.: March 25, 2004.
The 2005 Base Realignment and Closure (BRAC) provision required the Department of Defense (DOD) to close the Armed Forces Institute of Pathology (AFIP). GAO was asked to address the status and potential impact of implementing this BRAC provision. This report discusses (1) key services AFIP provides to the military and civilian communities; (2) DOD's plans to terminate, relocate, or outsource services currently provided by AFIP; and (3) the potential impacts of disestablishing AFIP on military and civilian communities. New legislation requires DOD to consider this GAO report as it develops its plan for the reorganization of AFIP. GAO reviewed DOD's plans, analysis, and other relevant information, and interviewed officials from the public and private sectors. AFIP pathologists perform three key services--diagnostic consultations, education, and research--primarily for physicians from DOD, the Department of Veterans Affairs (VA), and civilian institutions. AFIP provides consultations when physicians cannot make a diagnosis or are unsure of their initial diagnosis. About half of its 40,000 consultations in 2006 were for DOD physicians, and the rest were nearly equally divided between VA and civilian physicians. AFIP's educational services train physicians in diagnosing the most difficult-to-diagnose diseases. Civilian physicians use these services more extensively than military physicians. In addition, AFIP pathologists collaborate with others on research applicable to military operations and general medicine, often using material from AFIP's repository of tissue specimens to gain a better understanding of disease diagnosis and treatment. To implement the 2005 BRAC provision, DOD plans to terminate most services currently provided by AFIP and is developing plans to relocate or outsource others. DOD plans to outsource some diagnostic consultations to the private sector through a newly established office and use its pathologists for consultations when possible. With the exception of two courses, DOD does not plan to retain AFIP's educational program. DOD also plans to halt AFIP's research and realign the repository, which is AFIP's primary research resource. The BRAC provision allows DOD flexibility to retain services that were not specifically addressed in the provision. As a result, DOD will retain four additional AFIP services and is considering whether to retain six others. DOD had planned to begin implementation of the BRAC provision related to AFIP in July 2007 and complete action by September 2011, but statutory requirements prevent DOD from reorganizing or relocating AFIP functions until after DOD submits a detailed plan and timetable for the proposed implementation of these changes to congressional committees no later than December 31, 2007. Once the plan has been submitted, DOD can resume reorganizing and relocating AFIP. Discontinuing, relocating, or outsourcing AFIP services may have minimal impact on DOD, VA, and civilian communities because pathology services are available from alternate sources, but a smooth transition depends on DOD's actions to address the challenges in developing new approaches to obtaining pathology expertise and managing the repository. For consultations, these challenges are to determine how to use existing pathology resources, obtain outside expertise, and ensure coordination and funding of services to avoid disincentives to quality care. While DOD has begun to identify the challenges, it has not developed strategies to address them. 
Similarly, whether the repository will continue to be a rich resource for military and civilian research depends on how DOD populates, maintains, and provides access to it in the future, but DOD has not developed strategies to address these issues. DOD contracted for a study, due to be completed in October 2008, of the usefulness of the material in the repository. DOD plans to use this study to help make decisions about managing the repository.
The annual number of fatalities from crashes involving large trucks increased 20 percent from 4,462 in 1992 to 5,355 in 1997 (see fig. 1). This increase reversed a trend of decreasing truck fatalities in the previous 5-year period, 1988-92. Also during the 1992-97 period, the fatality rate—the number of fatalities per 100 million miles traveled by large trucks—remained fairly constant at about 2.9 after decreasing by 27 percent between 1988 and 1992. The recent increases in annual fatalities reflect in part increases in truck travel: the number of miles traveled increased by 25 percent from 1992 to 1997. If truck travel continues to increase at this rate, and nothing is done to reduce the fatality rate, the annual number of fatalities could increase to 5,800 in 1999 and to more than 6,000 in 2000 (see fig. 2). While we are concerned that the number of fatalities from crashes involving large trucks could increase in the next few years, only about 1 percent of all truck crashes reported to police in 1997 resulted in a fatality. About 99 percent resulted in injuries or property damage only. From 1988 through 1997, the number of people injured each year increased overall from 130,000 to 133,000. During the same period, the number of injuries per 100 million miles traveled fell from 92 to 69. In addition, the annual number of crashes involving large trucks that resulted in property damage only increased from 291,000 to 329,000 while the number of these crashes per 100 million miles traveled decreased from 206 to 172. For each mile traveled from 1988 through 1997, large trucks were involved in fewer total crashes than cars were. However, large trucks were involved in a greater number of fatal crashes per mile traveled (see fig. 3). The higher fatal crash rate for large trucks is not surprising, considering the difference in weight between large trucks and cars. When there is such a mismatch in weight between the vehicles involved in a crash, the lighter one and its occupants tend to suffer more damage. In fatal crashes between large trucks and cars in 1997, 98 percent of the fatalities were occupants of the car. While no definitive information on the causes of fatal crashes exists, there is information on factors that may contribute to these crashes. Data from the National Highway Traffic Safety Administration's Fatality Analysis Reporting System show that errors on the part of car drivers have been cited more frequently as contributing factors to crashes between large trucks and cars. In fatal crashes, police report driver errors or other factors related to a driver's behavior that contributed to the crash. In 98 percent of the fatal crashes between large trucks and cars in 1997, driver factors were recorded for one or both drivers. Errors by car drivers were reported in 80 percent of the crashes, while errors by truck drivers were reported in 28 percent of the crashes. The inference that car drivers were more often "at fault" than truck drivers has been disputed by safety groups. These groups maintain that because far more truck drivers than car drivers survive fatal crashes between large trucks and cars, more truck drivers have the opportunity to tell the officer at the crash scene their version of how the crash occurred. However, a recent study found that in fatal crashes in 1994 and 1995 in which both the truck driver and the car driver survived, car driver errors were cited in 74 percent of the crashes compared to 35 percent for truck driver errors.
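The fatality projection described above can be approximated with a simple compounding calculation. The sketch below is illustrative only: it assumes truck travel keeps growing at the annualized rate implied by the 25 percent increase in miles traveled from 1992 to 1997 and that the fatality rate per 100 million miles stays constant, so fatalities grow in step with travel. The method actually used to produce the projected figures may differ.

```python
# Illustrative reconstruction of the fatality projection; the growth assumption below
# (annualized rate from the 25 percent increase in truck miles, 1992-97, with a constant
# fatality rate per mile) is ours, not necessarily the method behind the cited figures.

fatalities_1997 = 5355
annual_growth = 1.25 ** (1 / 5)  # about 4.6 percent per year

for year in (1998, 1999, 2000):
    projected = fatalities_1997 * annual_growth ** (year - 1997)
    print(year, round(projected))
# Prints approximately 5599, 5855, and 6122 -- on the order of the projections cited
# above (about 5,800 in 1999 and more than 6,000 in 2000).
```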
The study's finding lends some support to the hypothesis that, compared with truck drivers, car drivers contribute more to fatal crashes between large trucks and cars. One driver factor—truck driver fatigue—was identified as the number one issue affecting the safety of motor carriers during a 1995 safety meeting of representatives from government, trucking associations, and safety interest groups. When truck driver fatigue contributes to truck crashes, truck drivers are killed more often than someone outside the truck. From 1992 through 1997, fatigue was cited by police officers for 11 percent of truck drivers in crashes that were fatal to the truck occupant(s) only. In contrast, fatigue was cited for less than 1 percent of truck drivers in crashes that were fatal to people besides truck occupants, such as car occupants or pedestrians. However, these figures may significantly underestimate the actual proportion of fatal truck crashes attributable to fatigue because of the difficulty of determining the pre-crash condition of the driver after a crash occurs. OMCHS estimates that truck driver fatigue is the primary factor in 15 to 33 percent of the crashes that are fatal to the truck occupant(s) only, and 1 to 2 percent of crashes that are fatal to people besides the truck occupant(s). Furthermore, the National Transportation Safety Board estimates that truck driver fatigue is the probable cause of 31 percent of crashes involving trucks over 26,000 pounds that are fatal to the driver. Mechanical defects, such as worn brakes or bald tires, have also been cited as a contributing factor to crashes involving large trucks. According to estimates in several studies, the percentage of such crashes that are attributed to mechanical failure ranges from 5 to 13 percent. In addition, in a 1996 study, OMCHS estimated that 29 percent of all large trucks had mechanical defects severe enough to warrant placing the vehicles out of service. While we do not know whether any of these large trucks had crashes as a result of their defects, they probably presented a higher crash risk than large trucks without defects. Other factors that may contribute to crashes or that may affect whether a fatality occurs in a crash include drivers’ blood alcohol concentration and use of safety belts. On both measures, truck drivers who are involved in fatal crashes might be more safety conscious than car drivers involved in such crashes. For example, in fatal crashes between large trucks and cars in 1997, about 1 percent of truck drivers had blood alcohol concentrations of 0.10 or above, compared to 15 percent of car drivers. In addition, 75 percent of truck drivers were wearing their safety belt in fatal crashes between a large truck and a car in 1997, compared to 47 percent of car drivers. The Federal Highway Administration has established a goal for 1999 of reducing the number of fatalities from crashes involving large trucks to fewer than 5,126—the number of fatalities that occurred in 1996. This goal is substantially below the projected figure of 5,800 for 1999 if recent trends continue. OMCHS has undertaken a number of activities that it believes will accomplish this short-term goal. While these activities could have a positive effect on truck safety issues over the long term if effectively implemented, OMCHS is not likely to reach its goal for 1999.
This is because (1) its initiative to target high-risk carriers for safety improvements depends on data that are not complete, accurate, or timely, (2) major components of several activities will not be completed before the end of 1999, and (3) the effectiveness of OMCHS’ educational campaign to improve car driver behavior is unknown. OMCHS’ activities are just one of many factors that affect the level of truck safety. OMCHS’ activities—either directly or through grants provided to states—are intended to improve truck safety largely by influencing the safety practices of trucking companies and the behavior of truck drivers. OMCHS does not directly influence other factors that affect truck safety, such as the use of safety belts by car occupants, highway design standards, trucks’ and cars’ handling and crashworthiness characteristics, traffic congestion, local traffic laws and enforcement, and state initiatives. Each year, OMCHS and state inspectors conduct thousands of on-site reviews of motor carriers’ compliance with federal safety regulations, known as compliance reviews. To identify high-risk carriers for these reviews, OMCHS uses a safety status measurement system known as SafeStat. SafeStat relies heavily on data from OMCHS’ motor carrier management information system (MCMIS) to rank motor carriers on the basis of four factors: (1) crashes, (2) driver factors, (3) vehicle factors, and (4) safety management. The crash factor is given twice the weight of the other factors because carriers that have been in crashes are considered more likely to be involved in crashes in the future. Carriers that are ranked in the worst 25 percent of all carriers for three or more factors or for the crash factor plus one other factor are targeted for a compliance review. However, SafeStat’s ability to accurately target high-risk carriers is limited because state officials do not report a large percentage of crashes involving large trucks to MCMIS. For 1997, OMCHS estimated that about 38 percent of all reportable crashes and 30 percent of the fatal crashes involving large trucks were not reported to MCMIS. Furthermore, 10 states reported fewer than 50 percent of the fatal crashes occurring within their borders, including four states that reported fewer than 10 percent. Because MCMIS does not contain a record of all crashes, a carrier that has been involved in a substantial number of crashes might go undetected by SafeStat. According to OMCHS officials, states do not report all crashes for several reasons. In particular, (1) states do not understand that complete reporting would enable OMCHS to more accurately target high-risk carriers, (2) state employees who submit crash data to MCMIS may not have sufficient training or incentives, or (3) there may be errors in some states’ databases that are preventing the transmittal of the data. According to OMCHS officials, an initiative to encourage states to report data for all crashes in a consistent manner is being developed; no implementation date has been set. SafeStat’s ability to target high-risk carriers is also limited by out-of-date census data in MCMIS. SafeStat uses the census data—such as the number of trucks operated by each carrier—to normalize safety data. For example, SafeStat checks the number of crashes reported for a carrier against the number of trucks operated by the carrier to determine if the number of crashes is disproportionate.
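The targeting rule just described lends itself to a simple sketch. The following Python fragment is a hypothetical illustration of that rule only—flag a carrier if it ranks in the worst 25 percent on three or more factors, or on the crash factor plus at least one other—using invented carrier scores; it does not reproduce SafeStat's actual scoring formulas, weights, or data.

    # Hypothetical illustration of the SafeStat targeting rule described above.
    # Higher scores are worse; all names and values are invented for this sketch.

    carriers = {
        "Carrier A": {"crash": 9.1, "driver": 4.0, "vehicle": 7.5, "safety_mgmt": 2.2},
        "Carrier B": {"crash": 1.3, "driver": 8.8, "vehicle": 8.1, "safety_mgmt": 7.9},
        "Carrier C": {"crash": 6.7, "driver": 2.1, "vehicle": 1.9, "safety_mgmt": 3.4},
        "Carrier D": {"crash": 8.4, "driver": 6.9, "vehicle": 2.8, "safety_mgmt": 1.1},
    }
    factors = ["crash", "driver", "vehicle", "safety_mgmt"]

    def worst_quartile(factor):
        """Return the set of carriers ranked in the worst 25 percent on a factor."""
        ranked = sorted(carriers, key=lambda c: carriers[c][factor], reverse=True)
        cutoff = max(1, round(len(ranked) * 0.25))
        return set(ranked[:cutoff])

    worst = {f: worst_quartile(f) for f in factors}

    for name in carriers:
        deficient = [f for f in factors if name in worst[f]]
        # Worst quartile on 3+ factors, or on the crash factor plus one other.
        targeted = len(deficient) >= 3 or ("crash" in deficient and len(deficient) >= 2)
        if targeted:
            print(name, "targeted for compliance review:", deficient)

As the discussion that follows makes clear, a screen like this is only as useful as the crash reports and carrier census data behind it.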
However, interstate carriers are required to file census data with OMCHS only once—when they initially go into business. After that, the census data are updated generally only when OMCHS or states conduct compliance reviews at the carriers’ facilities. Each year from 1993 through 1997, these reviews were conducted for fewer than 4 percent of the interstate carriers listed in MCMIS, whose number increased from 275,000 to more than 415,000 over the period. According to OMCHS officials, a system to update census data annually will not be implemented for at least 2 years. As we reported in 1997, states have improved the timeliness of reporting the roadside inspection, compliance review, and crash data used by SafeStat. However, they are still not meeting OMCHS’ reporting deadlines. OMCHS’ December 1996 guidance to states includes deadlines to report the results of roadside inspections and compliance reviews within 21 days, and crashes within 90 days. As shown in table 1, states improved the timeliness of reporting data to MCMIS from fiscal year 1997 to 1998 but were missing OMCHS’ deadlines by an average of 8 to 16 days. Data problems also exist at the state level. In fiscal year 1998, all states submitted performance-based safety plans to OMCHS for the first time. Under these plans, states must identify areas that need improvement, such as sections of highways where a disproportionate number of crashes involving large trucks have occurred, and develop a plan for improving those areas. In a pilot program to implement performance-based plans, 5 of the 13 pilot states reported that they lacked sufficient or timely data to accurately identify areas that need improvement. OMCHS officials said that insufficient data—such as carrier size information that is used to help states focus their safety education programs for carriers—have also been a problem for some states once they have identified problem areas and are developing improvement plans. Several of OMCHS’ activities that could improve large truck safety—including revising the rule governing the number of hours that truck drivers can drive and targeting high-risk carriers through the number of citations drivers receive—will not be completed before the end of 1999. The ICC Termination Act of 1995 directed the Federal Highway Administration to modify the existing hours of service rule for commercial motor vehicles to incorporate countermeasures for reducing fatigue-related incidents, such as crashes. The act required the Administration to issue an advance notice of proposed rulemaking by March 1, 1996; this notice was issued on November 5, 1996. The act also required a proposed rule within 1 year after the advance notice and a final rule within 2 years after that deadline. The Administration has not issued a proposed rule. OMCHS officials explained that revising the rule is a difficult and very contentious issue and that the final rule will not be issued until 2000 or later. In addition, OMCHS has concluded that high-risk carriers can be more accurately targeted by tracking the number of citations issued to each carrier’s drivers. A 1997 report prepared for the Federal Highway Administration found that trucking companies with higher rates of citations—for such things as overweight vehicles or moving violations—are also more likely to have higher accident rates. OMCHS officials have stated that they plan to develop software that will track the number of citations that each carrier’s drivers receive.
However, states must first agree on a standard format for collecting and reporting citations, and OMCHS does not yet have an estimated date for implementing its plan to use driver citations as a targeting mechanism. Because of the large contribution of car driver errors to fatal crashes between large trucks and cars, OMCHS launched the “No-Zone” campaign in 1994. (“No-Zone” is a term used to describe the areas around a truck where the truck driver’s visibility is limited.) This campaign is intended to reduce crashes between large trucks and cars by educating car drivers about how to safely share the road with large trucks and about trucks’ limitations, such as reduced maneuverability, longer stopping distances, and blind spots. The campaign’s public education efforts include public service announcements via radio, television, and print; brochures; posters; and decals on large trucks. Because car drivers between 15 and 20 years old were found to be involved in a relatively high percentage of fatal crashes, the “No-Zone” campaign focused a large part of its public outreach on this age group. The campaign has a goal of reducing fatal crashes involving large trucks and cars by 10 percent over a 5-year period. However, as evidenced by the overall increase in the number of fatalities since 1994, the campaign apparently did not make any progress toward achieving its goal through 1997, the last year for which data are available. OMCHS has not determined to what extent, if any, the “No-Zone” campaign has contributed to changing car drivers’ behavior and reducing crashes between large trucks and cars. While OMCHS plans to conduct a national telephone survey within the next year to determine the level of public recognition of the “No-Zone” campaign, the survey will not measure whether car drivers’ behavior has changed. These findings summarize our work to date. We are continuing our review of the effectiveness of OMCHS for this Subcommittee. Mr. Chairman, this concludes my statement. I will be pleased to answer any questions that you or Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the safety of large commercial trucks on the nation's highways, focusing on: (1) trends in crashes involving large trucks; (2) factors that contribute to such crashes; and (3) the Federal Highway Administration's Office of Motor Carrier and Highway Safety's (OMCHS) activities to improve the safety of large trucks. GAO noted that: (1) of the nearly 42,000 people who died on the nation's highways in 1997, about 5,400 died from crashes involving large trucks; (2) this represents a 20 percent increase from 1992; (3) at the same time, the annual number of miles traveled by large trucks increased by a similar proportion; (4) if this trend of increasing truck travel continues, the number of fatalities could increase to 5,800 in 1999 and to more than 6,000 in 2000; (5) while trucks are involved in fewer crashes per mile traveled than are cars, crashes involving trucks are more likely to result in a fatality; (6) in 1997, 98 percent of the fatalities from crashes between trucks and cars were the occupants of the car; (7) although no definitive information on the causes of crashes involving large trucks exists, several factors contribute to these crashes; (8) these contributing factors include errors on the part of car and truck drivers, truck driver fatigue, and vehicle defects; (9) of these factors, errors on the part of car drivers are cited most frequently as contributing to crashes involving large trucks; (10) specifically, errors by car drivers were reported in 80 percent of the crashes, while truck driver errors were reported in 28 percent of the crashes; (11) while many factors outside OMCHS' authority--such as the use of safety belts by car occupants and states' actions--influence the number of fatalities that result from crashes involving large trucks, the Federal Highway Administration has established a goal for 1999 of reducing these fatalities; (12) its goal is to reduce the number of fatalities to below the 1996 level of 5,126--substantially less than the projected figure of 5,800; (13) OMCHS has undertaken a number of activities intended to achieve this goal, such as identifying high-risk carriers for safety improvements and educating car drivers about how to share the road with large trucks; and (14) however, OMCHS is unlikely to reach the goal because: (a) its initiative to target high-risk carriers for safety improvements depends on data that are not complete, accurate, or timely; (b) several activities will not be completed before the end of 1999; and (c) the effectiveness of OMCHS' educational campaign to improve car drivers' behavior is unknown.
FPA includes several provisions designed to protect fish, wildlife, and the environment from the potentially damaging effects of a hydropower project’s operations. Specifically: Section 4(e) states that licenses for projects on federal lands reserved by Congress for other purposes—such as national forests—are subject to the mandatory conditions set by federal resource agencies, including the Forest Service, the Bureau of Indian Affairs, the Bureau of Land Management, the Bureau of Reclamation, and FWS. Section 10(a) requires FERC to solicit, from federal and state resource agencies and Indian tribes affected by a hydropower project’s operation, recommendations on the terms and conditions to be proposed for inclusion in a license. Section 10(j) authorizes federal and state fish and wildlife agencies to recommend license conditions to benefit fish and wildlife. FERC must include section 10(j) recommendations in the hydropower licenses unless it (1) finds them to be inconsistent with law and (2) has already established license conditions that adequately protect fish and wildlife. Section 18 requires FERC to include in licenses the fish passage prescriptions issued by resource agencies, such as FWS and NMFS. Under section 241 and the interim rules, licensees and other nonfederal stakeholders may request a trial-type hearing, lasting up to 90 days, on any disputed issue of material fact with respect to a preliminary condition or prescription. An administrative law judge (ALJ), to whom the relevant resource agency refers the request, must resolve all disputed issues of material fact related to an agency’s preliminary conditions or prescriptions in a single hearing. The interim rules contain procedures for consolidating multiple hearing requests involving the same project. Under section 241 and the interim rules, licensees and other nonfederal stakeholders may also propose alternatives to the preliminary conditions or prescriptions proposed by the resource agencies. Under section 241, resource agencies are required to adopt the alternatives if the agency determines that they adequately protect the federal land and either cost significantly less to implement or result in improved electricity production. If the alternatives do not meet these criteria, the agencies may reject them. In either case, under section 241, resource agencies must formally submit a statement to FERC explaining the basis for any condition or prescription the agency adopts and the reasons for not accepting any alternative under this section. The statement must demonstrate that the Secretary of the department gave equal consideration to the effects of the alternatives on energy supply, distribution, cost, and use; flood control; navigation; water supply; and air quality (in addition to the preservation of other aspects of environmental quality). In addition, the resource agencies often negotiate with the stakeholders who submitted the alternatives and settle on modifications of the agencies’ preliminary conditions and prescriptions. FPA requires licensees to pay reasonable annual charges in amounts fixed by FERC to reimburse the United States for, among other things, the costs of FERC’s and other federal agencies’ administration of the act’s hydropower provisions. To identify these costs—virtually all of which are related to the relicensing process—FERC annually requests that federal agencies report their costs related to the hydropower program for the prior fiscal year.
FERC then bills individual licensees for their share of FERC’s and the other federal agencies’ administrative costs, basing these shares largely on the generating capacity and amount of electricity generated by the licensees’ projects. FERC deposits the licensees’ reimbursements—together with other annual charges and filing fees that it collects—into the U.S. Treasury as a direct offset to its annual appropriation. Receipts that exceed FERC’s annual appropriation are deposited in the General Fund of the U.S. Treasury. Nonfederal stakeholders—licensees, states, environmental groups, and an Indian tribe—used the section 241 provisions for 25 of the 103 (24 percent) eligible hydropower projects being relicensed, although the use of these provisions has decreased since the first year. In response to the use of these provisions, resource agencies modified most of the conditions and prescriptions that they had originally proposed. In addition, trial-type hearings were completed for three projects, with the resource agencies prevailing in most of the issues in these hearings. From November 17, 2005, through May 17, 2010, 103 hydropower projects being relicensed, including 49 transition projects, were eligible for nonfederal stakeholders to use the section 241 provisions to submit alternative conditions or prescriptions or request a trial-type hearing. Nonfederal stakeholders have used the provisions for 25 of these 103 projects, including 15 of the 49 transition projects. Table 1 shows the 25 projects, the nonfederal stakeholder proposing alternatives, the affected federal resource agency, and whether the stakeholder requested a trial-type hearing. In each of these projects, the licensee submitted one or more alternatives. In addition, in the DeSabla-Centerville, Klamath, and McCloud-Pit projects, stakeholders other than the licensee also submitted alternatives. The use of section 241 provisions has decreased since the first year. In fiscal year 2006, nonfederal stakeholders used section 241 provisions for 19 projects undergoing relicensing. By comparison, after fiscal year 2006, nonfederal stakeholders used the provisions for only 6 projects. Fifteen of the 19 projects in which stakeholders used the provisions in fiscal year 2006 were transition projects. These transition projects included 11 projects whose original licenses had expired and that were operating on annual licenses at the time the interim rules were implemented, which helped create the initial surge of projects eligible to use section 241. As table 2 shows, the number of eligible nontransition projects—projects that had received preliminary conditions and prescriptions from federal resource agencies after section 241 was enacted—for which nonfederal stakeholders have sought to use section 241 provisions has declined since the first year. However, the number of nontransition projects becoming subject to these provisions has not varied widely. Licensees and other nonfederal stakeholders had proposed a total of 211 alternatives—194 alternative conditions and 17 alternative prescriptions—for the 25 projects where section 241 provisions were used. However, these numbers do not necessarily reflect the number of issues considered because section 4(e) conditions and section 18 fishway prescriptions are counted differently. For example, a resource agency may issue a section 4(e) condition for each part of a particular topic. In contrast, NMFS or FWS will typically issue a single section 18 fishway prescription with multiple sections.
Of the 25 projects, stakeholders proposed alternative conditions for 19 and alternative prescriptions for 9. Table 3 provides the number of alternative conditions proposed, accepted, rejected, and pending, and the number of preliminary conditions modified or removed for 19 of the 25 projects. Table 4 provides the number of alternative prescriptions proposed, accepted, rejected, and pending, and the number of preliminary prescriptions modified or removed in settlement for 9 of the 25 projects. As the tables show, instead of accepting or rejecting alternative conditions and prescriptions, resource agencies most frequently modified the original conditions and prescriptions in settlement negotiations with the nonfederal stakeholders. In all, resource agencies did not formally accept any alternatives as originally proposed and instead modified a total of 140 preliminary conditions and prescriptions for 22 of the 25 projects, rejected a total of 42 alternative conditions and prescriptions in 5 projects, and removed a total of 9 preliminary conditions and prescriptions in 4 projects. Licensees submitted 204 of the 211 alternative conditions and prescriptions. State agencies or nongovernmental organizations submitted the remaining 7 alternative conditions, 4 of which were rejected by the resource agencies and 3 of which were still being considered as of May 17, 2010. Section 241 directs the Secretary of the relevant resource agency to explain the basis for any condition or prescription the agency adopts, provide a reason for not accepting any alternative condition under this section, and demonstrate that it gave equal consideration to the effects of the alternatives on energy supply, distribution, cost, and use; flood control; navigation; water supply; and air quality (in addition to the preservation of other aspects of environmental quality). Similarly, the agencies’ interim rules provide, “The written statement must explain the basis for the modified conditions or prescriptions and, if the Department did not accept an alternative condition or prescription, its reasons for not doing so.” While the agencies provided an explanation for rejecting all 42 alternative conditions and prescriptions, they did not explain the reasons for not accepting a proposed alternative for 127 of the 140 modified conditions and prescriptions. Without an explanation, it is difficult to determine the extent, type, or basis of changes that were made and difficult to determine if and how the proposed alternatives affected the final conditions and prescriptions issued by the agencies. As of May 17, 2010, nonfederal stakeholders requested trial-type hearings for 18 of the 25 projects in which the section 241 provisions were used, and 3 trial-type hearings were completed. Most of these requests were made by licensees. The requests for hearings in 14 of the 18 projects were withdrawn when nonfederal stakeholders and resource agencies reached a settlement agreement before the ALJ made a ruling, and 1 request is pending as of May 17, 2010, because the licensee is in negotiations to decommission the project. Prior to a trial-type hearing, an ALJ holds a prehearing conference to identify, narrow, and clarify the disputed issues of material fact. The ALJ must issue an order that recites any agreements reached at the conference and any rulings made by the ALJ during or as a result of the prehearing conference, which can include dismissing issues the ALJ determines are not disputed issues of material fact.
For the three projects that have completed trial-type hearings, the number of issues was reduced from 96 to 37 after prehearing conferences. In addition, in a fourth project in which the federal resource agencies and the licensee eventually reached a settlement before going to a hearing, the number of issues was reduced from 13 to 1 after the prehearing conference. As table 5 shows, the three trial-type hearings were held for the Klamath project, in California and Oregon; the Spokane River project, in Idaho and Washington; and the Tacoma project, in Colorado, all of which are nontransition projects. In addition to the licensees requesting hearings, one nongovernmental organization and one tribe requested a hearing for the Klamath project. The Spokane River and Tacoma hearings were completed in 90 days, the time allotted by the interim rules, while Klamath required 97 days. As the table also shows, of the 37 issues presented, the ALJ ruled in favor of the federal resource agency on 25 issues, ruled in favor of the licensee on 6 issues, and offered a split decision on 6 issues. According to the relicensing stakeholders we spoke with, section 241 provisions have had a variety of effects on relicensing in three areas: (1) settlement agreements between licensees and resource agencies, (2) conditions and prescriptions that the resource agencies set, and (3) agencies’ workload and cost. Most licensees and a few resource agency officials that we spoke with said that section 241 encourages settlement agreements between the licensee and resource agency. In contrast, other agency officials we spoke with said that section 241 made it more difficult to reach a settlement agreement with the licensee. Regarding conditions and prescriptions, some stakeholders commented that under section 241, resource agencies generally researched their conditions and prescriptions more thoroughly, while all seven of the environmental groups’ representatives and some resource agency officials we spoke with said that resource agencies issued fewer or less environmentally protective conditions and prescriptions. Resource agency officials also raised concerns about increases in workload and costs as a result of section 241. Finally, many of the stakeholders offered suggestions for improving the use of section 241. Most of the licensees and a few resource agency officials we spoke with said that section 241 encourages settlement agreements between the licensee and resource agency. Several licensees commented that before section 241 was enacted, they had little influence on the mandatory conditions and prescriptions and that the resource agencies had made decisions on which conditions and prescriptions to issue without the potential oversight of a third-party review. One licensee commented that resource agencies had little incentive to work collaboratively with the licensee during relicensing prior to section 241. Several licensees and a few resource agency officials said that under section 241, some resource agencies have been more willing to negotiate their conditions and prescriptions to avoid receiving alternatives and requests for trial-type hearings. Some resource agency officials, however, said that in some cases, reaching a settlement with the licensee has been more difficult under section 241 than in previous negotiations.
Specifically, they noted the following: If licensees request a trial-type hearing, resource agencies and licensees have to devote time and resources to preparing for the potential hearing instead of negotiating a settlement. Section 241 made the relicensing process less cooperative and more antagonistic. When, for example, a licensee did not conduct the agencies’ requested studies, the agencies had less information to support their conditions and prescriptions. As a case in point, one NMFS regional supervisor told us that a licensee declined to conduct a study about the effects of its dams’ turbines on fish mortality. However, the licensee subsequently requested a trial-type hearing because, it argued, the agency had no factual evidence to support its assertion that the turbines injured or killed fish. Some licensees used their ability to request a trial-type hearing as a threat against the agencies’ issuance of certain conditions, prescriptions, or recommendations. For example, two NMFS biologists and their division chief told us that a licensee had threatened to issue a trial-type hearing request on fish passage prescriptions if NMFS made flow rate recommendations that it did not agree with. The Hydropower Reform Coalition, a coalition of conservation and recreational organizations, commented that from its experience, participation in settlement negotiations under section 241 is “almost exclusively limited to licensees.” It also commented that agreements reached by the license applicant and resource agency are not comprehensive settlement agreements involving licensees, state and federal resource agencies, tribes, nongovernmental organizations, and other interested parties. Some licensees said that agencies now put more effort into reviewing and providing support for their conditions and prescriptions because licensees or other nonfederal stakeholders could challenge the terms in a trial-type hearing. Several agency officials commented that they generally conduct more thorough research and provide a more extensive explanation about mandatory conditions and prescriptions than they had for projects prior to section 241. A few agency officials also commented that they are asking licensees to conduct more extensive studies about the effects of their hydropower projects to ensure that the agencies have sufficient information for writing conditions and prescriptions. Views differed on whether conditions and prescriptions issued since section 241 was enacted have been as protective as before. All seven environmental group representatives that we spoke with expressed concerns that resource agencies were excluding and writing less protective conditions, prescriptions, and recommendations to avoid trial-type hearings. For example, one group commented that in one hydropower project, under section 241, agency officials settled for stream flow rates that were lower than necessary for protecting and restoring the spawning habitat for fish in the project area. Some agency officials said the conditions and prescriptions they have issued are as protective as those issued prior to the enactment of section 241. Others said that they now issue fewer or less environmentally protective conditions or prescriptions to avoid a costly trial-type hearing.
In addition, some other officials commented that instead of issuing conditions and prescriptions that could result in a trial-type hearing, agencies have either issued recommendations or reserved authority to issue conditions and prescriptions at a later time. While a reservation of authority allows the resource agency to issue conditions and prescriptions after the issuance of the license, one regional agency official told us that in his experience, this rarely occurs. At one regional office, two staff biologists and their division chief told us that while they still issue prescriptions that meet the requirements of resource protection, these prescriptions are less protective than they would have been without the possibility of a trial-type hearing. Many agency officials said that the added effort they put into each license application since the passage of section 241 has greatly increased their relicensing workloads. Several agency officials also told us that even greater efforts are needed when a trial-type hearing is requested. To complete the work needed for a trial-type hearing, agencies often need to pull staff from other projects, which, according to these officials, can lead local offices to neglect their other responsibilities. Officials commented that whether they win or lose, agencies must fund an ALJ, expert witnesses, and their own attorneys for a trial-type hearing. Although they did not track all costs, the Bureau of Indian Affairs, Bureau of Land Management, Interior’s Office of the Solicitor, FWS, Forest Service, and NMFS provided individual estimates totaling approximately $3.1 million for trial-type hearings in the following three projects: Approximately $300,000 for the Tacoma project. Approximately $800,000 for the Spokane River project. Approximately $2 million for the Klamath project. Among all the resource agencies, only NMFS has dedicated funding for section 241 activities. However, this funding only covers administrative costs related to a trial-type hearing and does not fund NMFS’s program staff or General Counsel staff for a hearing. Many of the agency officials, licensees, and other stakeholders we spoke with had suggestions on how to improve section 241 and the relicensing process. For example, several licensees and agency officials raised concerns that the 90-day period for a trial-type hearing, including a decision, was too short and resulted in the need to complete an enormous amount of work in a compressed time frame. Some said that an ALJ without a background in hydropower issues needed more time to review the information presented following the hearing. Some stakeholders suggested allowing the ALJ to make his or her decision outside of the 90-day period. Other stakeholders, however, commented that an extension of the 90-day period could result in greater costs for all parties. One regional hydrologist suggested using a scientific peer review panel rather than an ALJ to hear arguments. Some stakeholders also suggested providing an opportunity to delay the start date of a trial-type hearing if all parties were close to reaching a settlement.
The stakeholders we spoke with also had several suggestions that were specific to their interests, which included the following: A couple of licensees noted that while the provisions of section 241 may be used after preliminary conditions and prescriptions are issued, they would like to be able to use these provisions after the issuance of final conditions and prescriptions because of concerns that the final conditions and prescriptions could differ from the agreed-upon terms that were arrived at through negotiations. These licensees assert that if they do not have this option, their only recourse is to sue in an appeals court after the license has been issued. These licensees were not aware, however, of any instance in which the terms had drastically changed between negotiations and the issuance of the final license. Several environmental group representatives commented that while section 241 allows stakeholders to propose alternative conditions and prescriptions, they would like to be allowed to propose additional conditions and prescriptions to address issues that the resource agencies have not addressed in their preliminary conditions and prescriptions. Three of these representatives also commented that the section 241 criteria for the acceptance of an alternative—that it be adequately protective and cost less to implement—favored licensees, not conservation groups. Instead, one representative suggested that the criterion for an alternative should be that it is more appropriately protective and not that it costs less to implement. In addition, another representative suggested that all interested parties should be allowed to participate in negotiations to modify the preliminary conditions and prescriptions after the submission of an alternative. In his experience, these negotiations have been limited to the stakeholder who uses the provisions of section 241 and the resource agency. A few resource agency officials suggested that licensees who lose a trial-type hearing should pay court costs, such as the costs of the ALJ. They also suggested that licensee reimbursements for the relicensing costs go directly to the resource agencies rather than to the General Fund of the U.S. Treasury. Almost 5 years have passed since the interim rules were issued, and several stakeholders that we spoke with expressed interest in having an opportunity to comment on a draft of the revised rules when they become available and before these rules become final. In addition, on June 2, 2009, the National Hydropower Association—an industry trade group—and the Hydropower Reform Coalition submitted a joint letter addressed to Interior, NMFS, and USDA expressing interest in an opportunity to comment on the revised rules before they become final. Section 241 of the Energy Policy Act of 2005 changed the hydropower relicensing process, including permitting licensees and other nonfederal stakeholders to propose alternative conditions and prescriptions. All parties involved in relicensing a hydropower project have an interest in understanding how the conditions and prescriptions for a license were modified, if at all, in response to proposed alternatives. Indeed, the interim rules require agencies to provide, for any condition or prescription, a written statement explaining the basis for the adopted condition and the reasons for not accepting any alternative condition or prescription.
While we found that the agencies have provided a written explanation for all 42 rejected conditions and prescriptions, they provided a written explanation of the reasons for not accepting a proposed alternative for only 13 of the 140 modified conditions and prescriptions. The absence of an explanation makes it difficult to determine the extent or type of changes that were made. Furthermore, when the interim rules that implemented section 241 were issued on November 17, 2005, the federal resource agencies stated that they would consider issuing final rules 18 months later. Instead, nearly 5 years later, final rules have not yet been issued. Given this delay and the amount of experience with section 241’s interim rules, many stakeholders we spoke with had ideas on how to improve section 241, and several expressed interest in providing comments when a draft of the final rules becomes available. To encourage transparency in the process for relicensing hydropower projects, we are recommending that the Secretaries of Agriculture, Commerce, and the Interior take the following two actions: Direct cognizant officials, where the agency has not adopted a proposed alternative condition or prescription, to include in the written statement filed with FERC (1) its reasons for not doing so, in accordance with the interim rules, and (2) whether a proposed alternative was withdrawn as a result of negotiations and an explanation of what occurred subsequent to the withdrawal; and Issue final rules governing the use of the section 241 provisions after providing an additional period for notice and an opportunity for public comment and after considering their own lessons learned from their experience with the interim rules. We provided the departments of Agriculture, Commerce, and the Interior; FERC; the Hydropower Reform Coalition; and the National Hydropower Association with a draft of this report for their review and comment. FERC had no comments on the report. Commerce’s National Oceanic and Atmospheric Administration (NOAA), Interior, USDA’s Forest Service, the Hydropower Reform Coalition, and the National Hydropower Association provided comments on the report and generally agreed with the report’s recommendations. While the Forest Service, Interior, and NOAA generally agreed with our recommendation that they file a written statement with FERC on their reasons for not accepting a proposed alternative, they all cited a circumstance in which they believed that they were not required to do so. Specifically, the three agencies commented that under the interim rules, they do not believe that they are required to explain their reasons for not accepting a proposed alternative when the alternatives were withdrawn as a result of negotiations. Two of the agencies, Interior and NOAA, agreed to indicate when a proposed alternative was voluntarily withdrawn, and NOAA acknowledged that providing an explanation on what occurred after the withdrawal of an alternative may be appropriate in some circumstances. We continue to believe that providing an explanation for not accepting a proposed alternative is warranted, even when the proposed alternative is voluntarily withdrawn as a result of negotiations, and we have modified our recommendation to address this situation.
The agencies could add transparency to the settlement process by laying out the basis for the modifications made to the preliminary conditions and prescriptions; the reasons the agencies had for not accepting the proposed alternative, including those alternatives withdrawn as a result of negotiations; and an explanation of what occurred subsequent to the withdrawal. Further, no provision of the interim rules discusses withdrawal of proposed alternatives or provides an exemption from the requirement to explain why a proposed alternative was not accepted. The agencies have an opportunity to clarify their approach to withdrawn conditions and prescriptions as they consider revisions to the interim rules. Interior and NOAA commented that they agreed with our recommendation regarding the issuance of final rules and are considering providing an additional public comment opportunity. According to Interior and NOAA, the resource agencies are currently working on possible revisions to the interim rules. NOAA also commented that resource agencies use the term “modified prescription” as a “term of art” to refer to the agencies’ final prescription, regardless of whether the final prescription actually differs from the preliminary one. As we noted in table 4 of this report, we counted a preliminary prescription as modified if the resource agency does not explicitly accept or reject the proposed alternative. In response to this comment, we added an additional clarifying footnote in the report. Interior suggested that we clarify in our report that agencies have no reason to write less protective recommendations because recommendations cannot be the basis for trial-type hearing requests. We did not change the language in our report because we believe that Interior’s assertion that agencies have no reason to write less protective recommendations may not always be the case. For example, as stated in our report, NMFS officials told us that a licensee had threatened to issue a trial-type hearing request on fish passage prescriptions if NMFS made flow rate recommendations that it did not agree with. The Hydropower Reform Coalition suggested that we collect additional information and conduct further analysis on the use of the section 241 provisions. We did not gather the suggested additional information or conduct additional analysis because in our view, they fell outside of the scope and methodology of our report. Appendixes I, II, III, IV, and V present the agencies’, the Hydropower Reform Coalition’s, and the National Hydropower Association’s comments respectively. Interior, NOAA, and the Hydropower Reform Coalition also provided technical comments, which we incorporated into the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, and the Interior; the Chairman of the Federal Energy Regulatory Commission; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VI. In addition to the contact named above, Ned Woodward, Assistant Director; Allen Chan; Jeremy Conley; Richard Johnson; Carol Herrnstadt Shulman; Jay Smale; and Kiki Theodoropoulos made key contributions to this report.
Under the Federal Power Act, the Federal Energy Regulatory Commission (FERC) issues licenses for up to 50 years to construct and operate nonfederal hydropower projects. These projects must be relicensed when their licenses expire to continue operating. Relevant federal resource agencies issue license conditions to protect federal lands and prescriptions to assist fish passage on these projects. Under section 241 of the Energy Policy Act of 2005, parties to the licensing process may (1) request a "trial-type hearing" on any disputed issue of material fact related to a condition or prescription and (2) propose alternative conditions or prescriptions. In this context, GAO was asked to (1) determine the extent to which stakeholders have used section 241 provisions in relicensing and their outcomes and (2) describe stakeholders' views on section 241's impact on relicensing and conditions and prescriptions. GAO analyzed relicensing documents filed with FERC and conducted a total of 61 interviews with representatives from relevant federal resource agencies, FERC, licensees, tribal groups, industry groups, and environmental groups. Since the passage of the Energy Policy Act in 2005, nonfederal stakeholders--licensees, states, environmental groups, and an Indian tribe--used section 241 provisions for 25 of the 103 eligible hydropower projects being relicensed, most of which occurred within the first year. Of these 25 projects, stakeholders proposed a total of 211 alternative conditions and prescriptions. In response, the federal resource agencies (U.S. Department of Agriculture's Forest Service, Department of Commerce's National Marine Fisheries Service, and several bureaus in the Department of the Interior) accepted no alternatives as originally proposed; instead, they modified a total of 140 and removed a total of 9 of their preliminary conditions and prescriptions and rejected 42 of the 211 alternatives. The remaining alternatives are pending as of May 17, 2010. Under section 241, resource agencies must submit a statement to FERC explaining the basis for accepting or rejecting a proposed alternative. While agencies generally provided explanations for rejecting alternative conditions and prescriptions, with few exceptions, they did not explain the reasons for not accepting alternatives when they modified conditions and prescriptions. As a result, it is difficult to determine the extent, type, or basis of changes that were made and difficult to determine if and how the proposed alternatives affected the final conditions and prescriptions issued by the agencies. As of May 17, 2010, nonfederal stakeholders requested trial-type hearings for 18 of the 25 projects in which section 241 provisions were used, and three trial-type hearings were completed. Of the remaining 15 projects, requests for hearings were withdrawn for 14 when licensees and agencies negotiated a settlement agreement before the administrative law judge made a ruling, and one is pending because the licensee is in negotiations to decommission the project. In the three hearings held to date, the administrative law judge ruled in favor of the agencies on most issues. According to the federal and nonfederal relicensing stakeholders GAO spoke with, the section 241 provisions have had a variety of effects on the relicensing process and on the license conditions and prescriptions.
While most licensees and a few agency officials said that section 241 encourages settlement agreements between the licensee and resource agency, some agency officials said that section 241 made agreements more difficult because effort has shifted from negotiating settlements to preparing for potential hearings. Regarding conditions and prescriptions, some stakeholders commented that under section 241, agencies put more effort into reviewing and providing support for their conditions and prescriptions, but environmental groups and some agency officials said that in their opinion, agencies issued fewer or less environmentally protective conditions and prescriptions. Many agency officials also raised concerns about increases in workload and costs as a result of section 241. For example, their estimated costs for the three hearings to date totaled approximately $3.1 million. Furthermore, many of the stakeholders offered suggestions for improving the use of section 241, including adjusting the time frame for a trial-type hearing. GAO recommends that cognizant officials who do not adopt a proposed alternative include their reasons for not doing so in their statements to FERC. The resource agencies generally agreed, but commented that no explanation is required when an alternative is withdrawn as a result of negotiations.
VA operates the largest integrated health care system in the United States, providing care to nearly 5 million veterans per year. The VA health care system consists of hospitals, ambulatory clinics, nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. VA delegates decision making regarding financing, health care service delivery, and medical facility operations to its 21 networks. Physicians who work at VA medical facilities are required to hold at least one current and unrestricted state medical license. Current and unrestricted licenses are those in good standing in the states that issued them, and licensed physicians may hold licenses from more than one state. State medical licenses are issued by state licensing boards, which generally establish the licensing requirements governing their licensed practitioners. To keep licenses current, physicians must renew their licenses before they expire and meet renewal requirements established by state licensing boards, such as continuing education. Renewal procedures and requirements vary by state. When state licensing boards discover violations of licensing practices, such as the abuse of prescription drugs or the provision of substandard care that results in adverse health effects, they may place restrictions on licenses or revoke them. Restrictions issued by a state licensing board can limit, or prohibit altogether, a physician's practice in that state. Generally, state licensing boards maintain a database that contains information on any restrictions or revocations of physicians’ licenses. When physicians apply for initial appointment, they initiate the credentialing process by completing VA’s application, which includes entering into VetPro—a Web-based credentialing system VA implemented in March 2001—information used by VA medical facility officials in the credentialing process. Among the credentialing information that VA requires physicians to enter into VetPro is information on all the state medical licenses they have ever held, including any that have expired. For their reappointments, physicians must update this credentialing information in VetPro. Once physicians enter their credentialing information into VetPro, a facility’s medical staff specialist—an employee who is responsible for obtaining and verifying the information used in the credentialing and privileging processes—performs a data check to be sure that all required information has been entered. In general, the medical staff specialist at each VA medical facility is responsible for the accuracy of VetPro’s credentialing data. The medical staff specialist verifies, with the original source of the information, the accuracy of the credentialing information entered by the physicians. This type of check is known as primary source verification. For example, the medical staff specialist contacts state licensing boards in order to verify that physicians’ state medical licenses are valid and unrestricted. At initial appointment only, VA requires medical staff specialists to query the Federation of State Medical Boards (FSMB), which maintains information from state licensing boards. This query enables officials to determine all the state medical licenses a physician has ever held, including those not disclosed by a physician to VA, and whether a physician has had any disciplinary actions taken against these licenses.
VA does not require this query at reappointment because VA headquarters regularly receives reports from FSMB on any VA physician whose name appears on FSMB’s list, indicating that disciplinary action has been taken against the physician’s state medical license. When VA headquarters receives a report from FSMB, it notifies the appropriate VA medical facility. VA’s credentialing process requires VA medical staff specialists to verify medical malpractice claims at initial appointment and at reappointment. These claims may be verified by contacting the court of jurisdiction or the insurance company involved in a claim, or by obtaining a statement of claim status from the attorney representing the physician. In addition, VA requires medical staff specialists to query the National Practitioner Data Bank (NPDB), which contains reports by state licensing boards, hospitals, and other health care entities on unprofessional behavior on the part of physicians or adverse actions taken against them. This query enables officials to determine whether physicians fully disclosed to VA any involvement they might have had in paid medical malpractice claims. Once a physician’s credentialing information has been verified, the medical staff specialist sends the information to the physician’s supervisor, who is known as a clinical service chief. The clinical service chief reviews this information along with the physician’s privileging information. Figure 1 illustrates VA’s credentialing process. Physicians, in addition to entering credentialing information into VetPro, must complete a written request for clinical privileges. The facility medical staff specialist provides a physician’s clinical service chief with the physician’s requested clinical privileges and information needed to complete the privileging process, including information that indicates that the credentialing information entered by the physician into VetPro has been verified with the appropriate sources. For reappointment, documentation from another physician is required stating that the physician is physically and mentally able to perform the clinical privileges requested. In addition, the medical staff specialist provides the clinical service chief with information on medical malpractice allegations or paid claims, loss of medical staff membership, loss or reduction of clinical privileges, or any challenges to the physician’s state medical licenses. The requested clinical privileges are reviewed by a clinical service chief, who recommends whether a physician should be appointed or reappointed to the facility’s medical staff and which clinical privileges should be granted. When deciding whether to recommend clinical privileges, a clinical service chief considers whether the physician has the appropriate professional credentials, training, and work experience to perform the privileges requested. For reappointment only, a clinical service chief is to consider observations of the physician’s delivery of health care to veterans, and VA’s policy requires that information on a physician’s performance, such as a physician’s surgical complication rate, be used when deciding whether to renew a physician’s clinical privileges. Based on the clinical service chief’s observations and the physician’s performance information, the clinical service chief recommends whether clinical privileges previously granted by the facility should remain the same, be reduced, or be revoked, and whether newly requested privileges should be added.
Clinical service chiefs forward their recommendations and the reasons for the recommendations to the next level of a medical facility’s privileging review process, which may be a professional standards board or a medical executive committee. A medical facility professional standards board or the medical executive committee reviews the recommendations of the clinical service chief and recommends to the facility director whether the physician should be appointed to the facility’s medical staff and which clinical privileges should be granted to the physician. The 2-year time period for renewal of clinical privileges and reappointment to the medical staff begins on the date that the privileges are approved by the medical facility’s director. The list of approved clinical privileges with the date of approval is maintained at VA medical facilities, and the initial appointment or reappointment date is entered into VetPro. Figure 2 illustrates VA’s privileging process. According to VA’s policy and a VA memorandum, information concerning individual physician performance that is used as part of the privileging process to reduce, revoke, or support granting clinical privileges must be collected separately from a medical facility’s quality assurance program. VA’s policy is based on a federal law that restricts the disclosure of documents produced in the course of VA’s quality assurance program. In general, documents created in connection with such a program are confidential and may not be disclosed except in limited circumstances. Individuals who willfully disclose documents that they know are protected quality assurance documents are subject to fines of up to $20,000. Although the law states that it is not intended to limit the use of documents within VA, VA’s policy expressly prohibits the use of such documents in connection with the privileging process. VA’s use of separate information sources for quality assurance and privileging decisions is intended to maintain the confidential status of documents produced in connection with quality assurance programs. According to VA, the confidentiality of individual performance information helps ensure the participation of providers, including physicians, in a medical facility’s quality assurance program by encouraging providers to openly discuss opportunities for improvement in provider practice without fear of punitive action. VA has another requirement that is related to the renewal of physicians’ clinical privileges. Medical facility officials are required to submit to VA’s Office of Medical-Legal Affairs information on paid VA malpractice claims. This information must be submitted within 60 days after the medical facility is notified about a paid malpractice claim. The Office of Medical-Legal Affairs is responsible for convening a panel of clinicians to determine whether a VA facility physician involved in the claim delivered substandard care. The Office of Medical-Legal Affairs notifies the medical facility director of the results of its review. If it is determined that the physician delivered substandard care to veterans, the medical facility must report the physician to NPDB within 30 days of being notified of the decision. VA medical facility officials would also use this determination to decide whether to grant clinical privileges to the physician involved in the VA medical malpractice claim. 
In our 2006 report, we found that the physician files at the seven facilities we visited demonstrated compliance with the four VA credentialing and four privileging requirements we reviewed. However, we found that there were problems complying with a fifth privileging requirement—to use information on a physician’s performance in making privileging decisions. We also found during our review that three of the seven medical facilities we visited did not submit to VA’s Office of Medical-Legal Affairs information on paid VA medical malpractice claims within 60 days after being notified that a claim was paid, as required by VA policy. Further, VA had not required its medical facilities to establish internal controls to help ensure that privileging information managed by medical staff specialists is accurate. Internal controls are important because at one facility we visited we found 106 physicians whose privileging process had not been completed by facility officials for at least 2 years because of inaccurate information. As a result, these physicians were practicing at the facility with expired clinical privileges. None of the VA medical facilities we visited for our 2006 report had internal controls in place that would prevent a similar situation from occurring. To better ensure that VA physicians are qualified to deliver care safely to veterans, we recommended that VA (1) provide guidance to medical facilities on how to collect individual physician performance information in accordance with VA’s credentialing and privileging policy for use in medical facilities’ privileging process, (2) enforce the requirement that medical facilities submit information on paid VA medical malpractice claims to VA’s Office of Medical-Legal Affairs within 60 days after being notified that the claim is paid, and (3) instruct medical facilities to establish internal controls to ensure the accuracy of their privileging information. VA states that it has implemented all three recommendations we made in our May 2006 report to address compliance with VA’s physician privileging requirements by establishing policy and guidance for its medical facilities. However, we do not know the extent of compliance with these requirements at VA medical facilities. VA implemented our recommendation that VA provide guidance to VA medical facilities on how to appropriately collect information on individual physician performance and use that information in VA’s privileging process. Physician performance information is to be used to assist VA medical facility clinical service chiefs in determining the appropriate clinical privileges that should be granted based on a physician’s clinical competence. VA implemented our recommendation by issuing a policy on October 2, 2007, that elaborated on the sources of physician performance information and the types of information that could be collected outside of VA medical facilities’ quality assurance programs. In addition, in July 2007, VA officials told us that they were in the process of developing online training programs on physician performance information to support implementation of our recommendation. The training will be mandatory for all VA medical facility clinical service chiefs and medical staff leaders responsible for the assessment and oversight of the privileging process and must be completed by January 31, 2008. 
VA also implemented our recommendation that it enforce its requirement that VA medical facilities report information on any paid VA malpractice claims involving their physicians to VA’s Office of Medical-Legal Affairs within 60 days after being notified of a paid claim. In June 2006, VA’s Office of Medical-Legal Affairs began notifying network and VA medical facility directors of delinquencies in reporting this information by the medical facilities. If a medical facility’s delinquency in reporting extends longer than 90 days, VA requires the Office of Medical-Legal Affairs to inform not only network and VA medical facility directors but also VA’s central office of the delinquency. VA’s Office of Medical-Legal Affairs reviews information on paid malpractice claims involving VA physicians to determine whether the physicians delivered substandard care. When VA medical facilities do not submit relevant malpractice claim information to this office, medical facility clinical service chiefs may make privileging decisions without complete information about substandard care provided by physicians. Further, VA implemented our recommendation that it instruct VA medical facilities to establish internal controls to ensure the accuracy of their privileging information. Internal controls help ensure that VA medical facility officials have accurate clinical privileging information and that physicians are not practicing at the facility with expired clinical privileges. To address our recommendation, VA first asked network directors to report on how they tracked the privileging status of VA physicians. In response to a VA memorandum sent on May 16, 2006, network directors provided a report indicating that their medical facilities had one or more mechanisms in place to identify physicians who were currently privileged at their facilities and to track whether their privileges had expired. In addition, VA instructed its network directors to monitor the internal controls at their facilities that ensure that VA medical facilities have accurate clinical privileging information and that physicians are not practicing with expired clinical privileges. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other members of the committee may have. For further information regarding this testimony, please contact Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Marcia Mann, Assistant Director; Mary Ann Curran; Christina Enders; Krister Friday; Lori Fritz; Rebecca Hendrickson; and Jason Vassilicos also contributed to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In a report issued in May 2006, GAO examined compliance with the Department of Veterans Affairs' (VA) physician credentialing and privileging requirements at seven VA medical facilities GAO visited. VA's credentialing process is used to determine whether a physician's professional credentials, such as licensure, are valid and meet VA's requirements for employment. VA's privileging process is used to determine which health care services or clinical privileges, such as surgical procedures, a VA physician is qualified to provide to veterans without supervision. Although GAO cannot generalize from its findings, GAO found that the seven facilities were complying with credentialing requirements. However, the facilities were not complying with aspects of certain privileging requirements. To better ensure that VA physicians are qualified to deliver care safely to veterans, GAO made three recommendations to improve VA's privileging of physicians. GAO was asked to testify on (1) how VA credentials and privileges physicians working in its medical facilities and (2) the extent to which VA has implemented the three recommendations made in GAO's May 2006 report that address VA's privileging requirements. To update its issued work, GAO reviewed VA's policies, procedures, and correspondence related to physician privileging and interviewed VA central office officials to determine if the recommendations made in GAO's May 2006 report were implemented. The Department of Veterans Affairs (VA) has specific requirements that medical facility officials must follow to credential and privilege physicians. VA requires its medical facility officials to credential and privilege facility physicians periodically so that they can continue to work at VA. Facility officials verify the information used in the credentialing process and query certain databases that contain information on disciplinary actions taken against a physician's state medical license and on a physician's professional competence. Each physician must also complete a written request for clinical privileges that is reviewed by the physician's supervisor, who considers whether the physician has the appropriate professional credentials, training, and work experience. In addition, every 2 years, the supervisor is to consider information on a physician's performance, such as a physician's surgical complication rate, when deciding whether to renew a physician's clinical privileges. In a May 2006 report, GAO examined compliance with VA's physician credentialing and privileging requirements at seven VA medical facilities it visited and made three recommendations designed to improve aspects of privileging and oversight of the process. The three recommendations were (1) to provide guidance to medical facilities on how to collect individual physician performance information in accordance with VA's credentialing and privileging policy to use in medical facilities' privileging process, (2) to enforce the requirement that medical facilities submit information on paid VA medical malpractice claims to VA within 60 days after being notified that the claim is paid, and (3) to instruct medical facilities to establish internal controls to ensure the accuracy of their privileging information. VA reports that it has implemented all three recommendations by establishing policy and guidance for its medical facilities. However, GAO does not know the extent of compliance with these requirements at VA medical facilities.
The United States, along with its coalition partners and various international organizations and donors, has made significant efforts to rebuild Iraq’s infrastructure and the capacity of its personnel. The United States alone has provided more than $40 billion since 2003, most of which has been obligated. The February 2007 U.S. strategy, The New Way Forward in Iraq, emphasizes a transition of responsibility for reconstruction to the Iraqi government. Iraq’s national government was established after a constitutional referendum in October 2005, followed by the election of the first Council of Representatives (Parliament) in December 2005 and the selection of the first Prime Minister, Nuri Kamal al-Maliki, in May 2006. By mid-2006, the cabinet was approved; the government now has 34 ministries responsible for providing security and essential services—including electricity, water, and education—for the Iraqi people. Iraq’s Ministry of Finance plays the key role in developing, analyzing, and executing the budget, including distributing funds to individual spending units and preparing periodic financial reports. Iraq’s financial management law directs the Ministry of Finance to consult with the Ministry of Planning and Development Cooperation in establishing budget funding priorities. Individual Iraqi spending units in the 34 central government ministries, the 15 provinces, and the Kurdistan region provide expenditure estimates to the Ministry of Finance. The Ministry of Finance, in consultation with the Ministry of Planning, uses this information to develop the budget and submits the draft budget to the Council of Ministers for approval before submitting it to the National Assembly for final approval. The Ministry of Planning is responsible for centralized project management support, including review and analysis of capital project plans and monitoring of contractor performance. As we reported in September 2007, the government of Iraq spent only 22 percent of its $6.2 billion capital projects budget for the central government and Kurdistan in 2006 (see fig. 1). The provinces received about $2 billion in 2006 funds for infrastructure and reconstruction projects, but these funds were included in the budget as transfers, rather than as part of the capital projects budget. The government of Iraq’s fiscal year 2007 budget allocates $10.1 billion for capital projects, including $6.4 billion for use by central government ministries, $2.1 billion for use by the provinces, and $1.6 billion for use by the semiautonomous Kurdistan region. However, we cannot determine the extent to which Iraq has spent these funds due to conflicting expenditure data. The U.S. government’s September 2007 benchmark report, citing unofficial Ministry of Finance data, stated that Iraqi ministries had spent 24 percent of their capital projects budgets, as of July 15, 2007. However, according to official Ministry of Finance expenditure data, Iraqi ministries had spent only 4.4 percent of their budgets for “nonfinancial assets,” or investment, as of August 31, 2007. Capital projects represent almost 90 percent of the investment budget, which combines capital projects and capital goods. In addition, the administration has relied on unofficial Provincial Reconstruction Team (PRT) data to track the provinces’ spending on capital projects and has reported the level of funds that the provinces have committed to projects as an indicator of spending. However, the provinces had spent only 12 percent of their 2007 funds, as of October 2007, according to PRT reporting. 
Citing unofficial Ministry of Finance data, the administration’s September 2007 Benchmark Assessment Report stated that the Iraqi ministries had spent 24 percent of their capital projects budgets, as of July 15, 2007. The report concluded from these data that the government of Iraq is becoming more effective in spending its capital projects budget compared with 2006. However, the unofficial data on the percentage of budget spent are significantly higher than official Iraqi expenditure data indicate. According to official reporting by the Ministry of Finance, as of August 31, 2007, Iraqi ministries spent only 4.4 percent of their 2007 investment budget (most of which is for capital projects). Table 1 compares the two sets of data. Although the 2007 Iraq budget has separate categories for capital goods and capital projects, the Iraqi government’s official expenditure data do not break out capital projects expenditures separately in 2007. To comply with new International Monetary Fund (IMF) budget classification requirements in 2007, the Iraqi government reports capital expenditures together under the heading of “nonfinancial assets,” which we refer to as investment in table 1. Capital projects represent almost 90 percent of the investment budget, which combines capital projects and capital goods. Appendix II provides further analysis of 2007 Ministry of Finance expenditure data and comparable 2006 data. In reviewing our draft report, Treasury officials stated that our official figure of 4.4 percent excluded capital project spending found in other budget categories. However, Treasury could provide no documentation that would allow us to verify whether, or the extent to which, this occurs. The State Department made similar comments in reviewing our report. However, State was unable to provide us with supporting documentation and referred us to the Department of the Treasury. Also, Treasury officials noted that the higher figure of 24 percent capital project spending could include commitments in addition to actual expenditures. The Ministry of Oil represents the largest share (24 percent) of the Iraqi government’s capital projects budget in 2007. According to the unofficial data reported by the U.S. administration, as of July 15, 2007, the Ministry of Oil had spent $500 million on capital projects, which is 21 percent of the ministry’s $2.4 billion capital projects budget. This reported level of spending has already surpassed the ministry’s total for 2006; however, it is not consistent with the much lower level of spending reflected in official Ministry of Finance data through August (see table 1). According to the Special Inspector General for Iraq Reconstruction (SIGIR), U.S. officials stated that the ministry may not have spent all of these funds but instead shifted them to subsidiaries such as the State Oil Marketing Organization, which have responsibility for spending much of the oil ministry’s capital projects budget. The Iraqi government provided $2.1 billion, or over 20 percent of the 2007 capital projects budget, to the provinces (not including the semiautonomous Kurdistan region), in amounts proportional to their populations. These funds are in addition to approximately $2 billion in 2006 provincial funds, most of which had not been transferred to the provinces until November and December of 2006. Because of the late transfer, the provinces were permitted to carry over unspent 2006 funds. 
Additionally, the semiautonomous Kurdistan region received a separate 2007 budget allocation of $1.6 billion, or 16 percent of the total 2007 capital projects budget. To track capital projects budget execution by the provinces, the administration uses unofficial commitment and expenditure information collected by PRTs. The administration’s September 2007 benchmark report cited PRT commitment data as an indicator of successful budget execution by the provinces. The September report stated that the provinces had committed almost half of their 2007 capital projects budgets. However, the extent to which committed funds indicate actual spending is unknown. Given the capacity and security challenges currently facing Iraq, many committed contracts may not be executed and would not result in actual expenditures, according to U.S. agency officials. According to PRT reporting, the provinces had committed 58 percent of their 2007 budget but had spent only 12 percent as of October 21, 2007. (For additional analysis of PRT data, see app. III.) U.S. officials noted that the provinces are still spending the 2006 funds they were permitted to carry over, which contributes to the low expenditure rate in 2007. Ultimately, actual spending by the provinces should be reflected in official 2007 expenditure data reported by the Ministry of Finance. U.S. government, coalition, and international agencies have identified a number of factors that challenge the Iraqi government’s efforts to fully spend its budget for capital projects. First, Treasury officials noted that violence and sectarian strife can delay capital budget execution by increasing the time and cost needed to award and monitor contracts, and by reducing the number of contractors willing to bid on projects. Second, these officials stated that recent refugee outflows and the de-Ba’athification process have reduced the number of skilled workers available and contributed to the exodus of Iraq’s professional class from the country. Third, U.S. and foreign officials noted that weaknesses in Iraqi procurement, budgeting, and accounting procedures impede completion of capital projects. U.S., coalition, and international officials noted that violence and sectarian strife remain major obstacles to developing Iraqi government capacity, including its ability to execute budgets for capital projects. The high level of violence has contributed to a decrease in the number of workers available and can increase the amount of time needed to plan and complete capital projects. The security situation also hinders U.S. advisors’ ability to provide the ministries with assistance and monitor capital project performance. Violence and sectarian strife have reduced the pool of available talent to budget and complete capital projects and, in many cases, have increased the time needed to complete projects. International officials noted that about half of Iraqi government employees are absent from work daily; at some ministries, those who do show up work only 2 to 3 hours per day for security reasons. U.S. and UN officials stated that, while the Ministry of Planning has a relatively skilled workforce, the security situation seriously hinders its ability to operate. These officials noted that 20 director generals (department heads or other senior officials) in the ministry had been kidnapped, murdered, or forced to leave the ministry in the 6 months prior to February 2007. Numerous U.S. 
and coalition officials also stated that security concerns delay the ability of advisors to provide assistance, noting that it is often too dangerous for staff to provide training or monitor contract performance. The high level of violence hinders U.S. advisors’ access to their counterparts in the ministries and directly affects the ability of ministry employees to perform their work. State and USAID efforts are affected by the U.S. embassy restrictions imposed on their movement. Embassy security rules limit, and in some cases bar, U.S. civilian advisors from visiting the ministries outside the Green Zone. For example, a former Treasury attaché noted that his team could not visit the Ministry of Finance outside the Green Zone and thus had limited contact with ministry officials. Further, USAID suspended efforts to complete the installation of the Iraqi Financial Management Information System (IFMIS) in May 2007 after five British contractors were kidnapped from the Ministry of Finance. U.S., coalition, and international agency officials have cited the relative shortage of trained budgetary, procurement, and other staff with technical skills as a factor limiting the Iraqis’ ability to plan and execute their capital spending. The security situation and the de-Ba’athification process have adversely affected available government and contractor staffing. Officials report a shortage of trained staff with budgetary experience to prepare and execute budgets and a shortage of staff with procurement expertise to solicit, award, and oversee capital projects. According to State and other U.S. government reports and officials, there has been decay for years in core functions of Iraq’s government capacity, including both financial and human resource management. Officials also state that today’s unsafe environment has resulted in a large percentage of Iraq’s more skilled citizens leaving the country. According to a UN report, between March 2003 and June 2007, about 2.2 million Iraqis left the country, and about 2 million were internally displaced. The UN has also estimated that at least 40 percent of Iraq’s professional class has left the country since 2003. One Iraqi official complained that those leaving the country tend to be from the educated and professional classes. As a result, fewer skilled Iraqi workers outside the government are available to bid on, design, and complete proposed capital projects. Further, a 2006 Department of Defense (DOD) report stated that Iraq’s government also confronts significant challenges in staffing a nonpartisan civil service and addressing militia infiltration of key ministries. The report noted that government ministries and budgets are sources of power for political parties, which staff ministry positions with party cronies as a reward for political loyalty. Some Iraqi ministries under the authority of political parties hostile to U.S. goals use their positions to pursue partisan agendas that conflict with the goal of building a government that represents all ethnic groups. For example, until late April 2007, the Ministries of Agriculture, Health, Civil Society, Transportation, Governorate Affairs, and Tourism provided limited access to U.S. officials, as they were led by ministers loyal to Muqtada al-Sadr, who has been hostile to U.S. goals. Weak procurement, budgetary, and accounting systems are of particular concern in Iraq because these systems must enable efficient execution of capital projects while protecting against reported widespread corruption. 
A 2006 survey of perceptions of corruption by Transparency International ranked Iraq’s government as one of the most corrupt in the world. A World Bank report notes that corruption undermines the Iraqi government’s ability to make effective use of current reconstruction assistance. According to a State Department document, widespread corruption undermines efforts to develop the government’s capacity by robbing it of needed resources, some of which are used to fund the insurgency; by eroding popular faith in democratic institutions, perceived as run by corrupt political elites; and by spurring capital flight and reducing economic growth. U.S. and international officials have cited many weaknesses in Iraqi procurement procedures and practices. The World Bank found that Iraq’s procurement procedures and practices are not in line with generally accepted public procurement practices, such as effective bid protest mechanisms and transparency on final contract awards. Iraqi procurement laws and regulations are composed of a mixture of Saddam Hussein-era rules, Coalition Provisional Authority (CPA) Order 87 requirements, and recent Iraqi government budgetary practices. The complexity of Iraq’s contracting regulations combined with the inexperience of many new Iraqi officials has led to a pervasive lack of understanding of these laws and regulations, according to State officials. The Iraqi government has sponsored conferences on budget execution in 2007 to clarify budgeting and procurement rules and procedures and issued regulations for implementing Order 87. However, laws and regulations are still complex and are frequently confusing to implement in practice, according to a Treasury official. U.S., coalition, and international officials have identified difficulties in complying with Iraqi procurement laws and regulations as a major impediment to spending Iraqi capital project budgets. For example, according to an Iraq Reconstruction Management Office (IRMO) official, the Iraqi procurement process often requires a minimum of three bids through a competitive bidding process. However, when fewer than three technically qualified bids are received, all bids are thrown out and the project cycles through a new round of bidding. The result has been fewer bids submitted in subsequent rounds. In addition, Iraqi procurement regulations require about a dozen signatures to approve oil and electricity contracts exceeding $10 million, which also slows the process, according to U.S. officials. Procurements over this amount must be approved by the High Contracting Commission, chaired by the Deputy Prime Minister, which causes further delays. U.S. advisors to the Ministry of Oil noted that the $10 million threshold is far too low, given the size of infrastructure projects in the energy sector. In June 2007, the Council of Ministers raised the dollar thresholds for contracts requiring High Contracting Commission approval from $10 million to $20 million for the Ministries of Defense, Electricity, Oil, and Trade, and raised the level from $5 million to $10 million for other ministries. However, the embassy noted that the increased thresholds may not improve budget execution without an increase in the number of trained personnel or technical assistance from the Ministry of Planning. Other features in the Iraqi budgetary and accounting systems adversely affect the tracking of capital projects spending. For example, government spending on reconstruction projects is not coordinated with spending for donor-financed projects, according to the World Bank. 
As a result, significant donor-financed expenditures are not included in the budget. In addition, according to U.S. officials, the budget is appropriated and tracked at too high a level of aggregation to allow meaningful tracking of decisions because multiple projects can be combined on a single line. Reconciliation of budget accounts is often impossible as budget execution reports are submitted late or with incomplete information. In addition, as discussed earlier, in response to IMF requirements, the government of Iraq began implementing a new budget classification and chart of accounts in 2007 that does not provide a separate breakout of capital projects spending. Appendix II provides further analysis of 2007 Ministry of Finance expenditure data and comparable 2006 data. U.S. agencies, the World Bank, and independent auditors have reported a number of serious internal control weaknesses in Iraqi government accounting procedures. U.S. officials reported that the government of Iraq uses a manual reporting system to audit expenditures, which does not provide for real-time reports. According to a Treasury official, the Iraqi government reports capital expenditures by ministry but not by specific project, which limits its ability to track capital projects expenditures. The World Bank reported that reconciliation of government of Iraq accounts is impossible because the government lacks consolidated information on the exact number of government bank accounts it has and the balances in them. The World Bank also noted that provincial governments do not provide an accounting of funds they receive. In addition, the independent public accounting firm for the Development Fund for Iraq reported numerous internal control weaknesses in Iraqi ministries for 2006, including that Iraqi ministries do not have policies and procedures manuals that detail comprehensive financial and internal controls. It also reported that the ministries do not have unified procurement policies and procedures, do not have proper project management and monitoring systems, and lack written project management policies and procedures manuals. Finally, the firm noted that the ministries’ internal audit departments do not cover the operations of state-owned companies and entities that are related to the ministries and that there are no proper monitoring procedures over the operations of the related companies. In early 2007, U.S. agencies increased the focus of their assistance efforts on improving the Iraqi government’s ability to effectively execute its budget for capital projects, although it is not clear what impact this increased focus has had, given the inconsistent expenditure data presented earlier in this report. Several new U.S. initiatives were established targeting Iraqi budget execution, including coordination between the U.S. embassy and an Iraqi task force on budget execution, and the provision of subject-matter experts to help the government track expenditures and to provide technical assistance with procurement. According to U.S. officials, these targeted efforts also reflect an increased interest of senior Iraqi officials in improving capital budget spending. In addition, improving Iraqi government budget execution is part of a broader U.S. assistance effort aimed at improving the capacity of the Iraqi government through automation of the financial management system, training, and advisors embedded with ministries. We recently reported on U.S. 
efforts to build the capacity of Iraqi ministries led by State, DOD, and USAID. The findings and recommendations from that report also apply to assistance efforts targeting Iraqi budget execution, which are a part of U.S. capacity-building efforts. Our report found that U.S. capacity-building efforts in Iraq faced several challenges that posed risks to their success. The report also found that U.S. agencies implementing capacity development projects have not developed performance measures for all of their efforts, particularly outcome-related performance measures that would allow them to determine whether U.S. efforts at the civilian ministries have achieved both U.S.- and Iraqi-desired goals and objectives. The report recommended that State, in consultation with the Iraqi government, complete an overall capacity-building strategy that includes plans to address risks and performance measures based on outcome metrics. In early 2007, U.S. agencies began providing technical assistance and coordination specifically targeted to addressing Iraq’s capital budget expenditure bottlenecks. This assistance includes the following: Coordinator for Economic Transition in Iraq. State established this position in the U.S. Embassy in February 2007 to work with senior Iraqi government officials and to coordinate with U.S. agencies that provide assistance related to improving Iraq’s budget execution. The coordinator participated in regular meetings of an Iraqi government budget execution task force, which includes the Deputy Prime Minister and the Ministers of Finance and Planning, to address impediments to spending the government’s capital projects budget. The coordinator also worked with the U.S. Treasury attaché to help the Iraqi government hold conferences on budget execution to educate government officials about Iraqi budget and procurement processes. According to a State official, the responsibilities of this position were transferred to Ambassador Charles Reis in June 2007, when he assumed responsibility for coordinating all economic and assistance operations in the Embassy. Budget Execution Monitoring Unit. The Iraq Reconstruction Management Office supported the creation of this unit in the Deputy Prime Minister’s office with four to six subject-matter experts to help assess budget execution by collecting and aggregating spending data for key ministries and provinces. The unit was established in spring 2007. State officials noted that the Budget Execution Monitoring Unit tracks aggregate spending by ministry. Although some ministries have demonstrated some capability to track what projects are undertaken with capital projects spending, the U.S. Embassy does not have a mechanism to track the results of capital projects spending across all ministries and provinces, according to State officials. Embassy officials and the Treasury attaché are analyzing how to help Iraqi ministries and provinces develop project tracking tools. Procurement Assistance Program. State and DOD provided funding for 10 to 12 international subject-matter experts to support this program within the Ministry of Planning and Development Cooperation to help ministries and provinces with procurement and capital budget execution, including policy interpretation, training, acquisition consulting, and technical assistance. The program was established in May 2007 and consists of two Procurement Assistance Centers in Baghdad and one in Erbil. 
The program is currently setting up 18 Provincial Procurement Assistance Teams that will provide direct assistance to provincial officials. In addition to these recently established efforts, which are specifically designed to help the Iraqi government execute its capital projects budget, several broader, ongoing capacity-building assistance efforts are under way. These efforts include assistance related to budget execution, although that is not their primary focus. They include the following: Iraqi Financial Management Information System (IFMIS). USAID and IRMO awarded contracts beginning in 2003 to BearingPoint to install a Web-based financial management information system for the government of Iraq to support a fully automated and integrated budget planning and expenditure reporting and tracking system at the central and provincial levels. The IMF has highlighted the importance of implementing IFMIS in its reviews of Iraq’s progress toward meeting the terms of the IMF Standby Agreement. However, according to USAID officials, the project experienced significant delays, due in part to a lack of full support by Iraqi government officials. USAID suspended assistance to the Ministry of Finance to implement IFMIS in June 2007, following the kidnapping of five BearingPoint employees. A USAID official stated that the Iraqi government continues to rely on its legacy manual accounting system, which contributes to delays in the government’s reporting of expenditure data. In lieu of IFMIS, Treasury and USAID are now assisting provincial governments with the use of inexpensive spreadsheet software to improve their financial management capability, according to Treasury officials. National Capacity Development Program. This USAID program is intended to improve public administration skills at the ministerial level and has provided training in a range of issues, including project management, budgeting, fiscal management, leadership, and information technology. As of October 2007, the program had trained more than 2,000 ministry officials from 30 different Iraqi institutions since November 2006, when the training program was first established, according to USAID officials. The program has trained 500 Iraqi government officials specifically in procurement, and 51 of the 500 completed the “Train the Trainers” course to enable them to train additional ministry officials in procurement. USAID officials also noted that the program includes embedded teams at the Ministry of Planning and Development Cooperation that are helping ministries set up proper procurement units with procurement tracking systems. Local Governance Program. This program, operated under a USAID contract, supports Iraq’s efforts to improve the management and administration of local, municipal, and provincial governments. According to USAID officials, this program includes several activities to assist with provincial budget execution. These efforts include the development of reference materials refreshed annually to reflect new guidance by the Minister of Finance and Minister of Planning and Development Cooperation; national and regional conferences that bring together provincial council members, governors, and members of their staffs to discuss provincial budget execution; and local staff in many provincial government centers assisting with provincial budget execution, as well as public finance advisors in some of the PRTs and at its Baghdad headquarters. Advisors to civilian and security ministries. 
These advisors assist in the development of the ministries’ budget planning and contracting skills. As of mid-2007, State and USAID were providing 169 advisors and subject-matter experts to civilian ministries to implement capacity development projects and provide policy advice and technical assistance at key ministries and government entities. In addition, DOD and Multinational Security Transition Command-Iraq (MNSTC-I) provided 215 embedded U.S. and coalition military and civilian advisors to the Ministries of Interior and Defense; and a U.S. Treasury attaché advises the Ministry of Finance. Treasury reports that it has completed four multiday budget execution workshops in 2007 that have trained over 120 central ministry and provincial government staff members, and it plans additional training for January 2008. The administration’s September 2007 Benchmark Assessment Report concluded that the government of Iraq has steadily improved its ability to execute capital projects spending as a result of U.S. assistance efforts. However, it is not clear what impact these efforts have had to date because of limitations with available data, outlined earlier in this report, and because much of the assistance specifically designed to improve budget execution was established too recently for U.S. agencies to fully evaluate it. To support continued economic growth and improve the delivery of services, the government of Iraq needs to make significant investment in its infrastructure. Making such investments depends on the government’s efficient execution of its capital budget. The administration’s September 2007 Benchmark Assessment Report concluded that the government of Iraq had steadily improved its ability to execute capital projects spending as a result of U.S. technical assistance. However, the additional information provided by Treasury and State in commenting on a draft of this report did not reassure GAO that accurate and reliable data on Iraq’s budget exist. State and Treasury continue to cite unofficial expenditure data to support assertions that the government of Iraq is becoming more effective in spending its capital projects budget, even though the data differ significantly from the official expenditure figures generated under a new IMF-compliant chart of accounts. The discrepancies between the unofficial and official data highlight the ambiguities about the extent to which the government of Iraq is spending its resources on capital projects. Thus, we do not believe these data should be used to draw firm conclusions about whether the Iraqi government is making progress in executing its capital projects budget. The lack of consistent and timely expenditure data limits transparency over Iraq’s execution of its multibillion-dollar 2007 capital budget and makes it difficult to assess the impact of U.S. assistance. To help ensure more accurate reporting of the government of Iraq’s spending of its capital projects budget, we recommend that the Secretary of the Treasury work with the government of Iraq and relevant U.S. agencies to enhance Treasury’s ability to report accurate and reliable expenditure data from the ministries and provinces. This reporting should be based on the IMF-compliant standards rather than unofficial data sources that are of questionable accuracy and reliability. We provided a draft of this report to the Departments of Defense, State, and the Treasury, and to USAID. Treasury and State provided written comments, which are reprinted in appendixes IV and V. 
Treasury, State, and USAID also provided technical comments and suggested wording changes that we incorporated as appropriate. DOD did not comment on the report. Treasury and State raised several concerns in commenting on our draft report. First, Treasury stated that our analysis of Iraq’s budget execution was based on incomplete and unofficial reporting, in particular, the unofficial July 15, 2007, data used to comply with a congressional reporting requirement (the September 2007 Iraq Benchmark Assessment Report). The administration highlighted these unofficial data in the September 2007 benchmark report to Congress to assert that Iraq’s central government and provinces were becoming more effective at spending their capital budgets. We do not believe these data should be used to draw firm conclusions about the Iraqi government’s progress in spending its capital budget. We agree that the unofficial data the administration used in the report to Congress do not portray a full and accurate picture of the situation. Accordingly, we compared these data with official Iraqi Ministry of Finance data to assess the extent to which the Iraqis had spent their capital projects budget. Since the spending gap between the administration’s unofficial data and the Ministry of Finance’s official data is strikingly large, we recommend that the Department of the Treasury work with the Ministry of Finance to reconcile these differences. Second, Treasury stated that our report incorrectly concluded that capital spending is only contained in the Iraqi budget item for “nonfinancial assets” (which we refer to as “investment”). State made a similar comment. Treasury and State asserted that capital spending is spread through many chapters in the new chart of accounts and that the amount is higher than the 4.4 percent cited in our report. However, State and Treasury did not provide us with evidence to demonstrate which Iraqi accounts included additional capital expenditures. Third, Treasury questioned our comparison of 2006 and 2007 Iraqi spending as displayed in appendix II. Treasury stated that it is misleading to compare 2006 and 2007 spending levels because of changes in Iraqi spending accounts between the 2 years. We added this note of caution to appendix II. However, we also made adjustments to account for the differences in the budget classification systems, thereby enabling valid comparisons between 2006 and 2007 data. Table 4 outlines the key differences in the 2006 and 2007 classification systems. After we updated the draft report with data through August 2007, budget execution ratios in 2007 remained lower in most cases than corresponding ratios in 2006. We believe this analysis provides additional perspective on the comparisons between 2006 and 2007 expenditures made in the administration’s September 2007 Benchmark Assessment Report to Congress. State also raised several concerns in commenting on a draft of this report. First, State commented that the draft report fails to accurately portray the “tangible progress” that the central government and provincial governments have made in budget execution. State commented that this progress represents a tangible example of Iraq’s leaders working together successfully. However, we do not believe these data are sufficiently reliable to conclude that U.S. assistance efforts have already achieved success in helping the Iraqi government execute its capital budget. 
Second, State attributed the discrepancy between the official and unofficial data cited in our report to “a time lag in data collection” and asserted that the July 15 data are representative of the government of Iraq’s performance as of the publication of the administration’s September 2007 Benchmark Assessment Report. After providing our draft report to State for comment, we received updated data from Treasury that clearly refute State’s comment. These updated data show that the central ministries spent 4.4 percent of their investment budget through August 2007, raising questions about the unofficial data reported to Congress in the administration’s September 2007 benchmark report. Finally, State commented that the amount of money committed and disbursed in the provinces during 2007 is “especially impressive.” However, as we noted in our report, commitments do not represent expenditures. The absence of provincial spending data in official Ministry of Finance reporting makes it difficult to determine the extent to which the Iraqi government was spending its 2007 capital projects funds. As Treasury noted in its comments, official Ministry of Finance reporting does not show provincial spending, and Treasury is working to resolve the discrepancy. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. In this report, we review the Iraqi government’s progress in expending its fiscal year 2007 capital projects budget. Specifically, we (1) examine the data the U.S. embassy uses to determine the extent to which the Iraqi government has spent its 2007 capital projects budget, (2) identify factors affecting the Iraqi government’s ability to spend these funds, and (3) describe U.S. government efforts to assist the Iraqi government in spending its capital projects funds. We undertook this review under the Comptroller General’s authority to conduct reviews on his own initiative, and in recognition of broad congressional interest in Iraq and the critical importance of Iraqi capital expenditures in rebuilding its infrastructure. To examine the data the U.S. Embassy in Baghdad uses to measure Iraqi government spending, we obtained expenditure data from the U.S. Department of the Treasury and the U.S. Embassy in Baghdad and interviewed knowledgeable U.S. agency officials. We reviewed three different sets of data on Iraqi government expenditures: official monthly expenditure reports from the Ministry of Finance through August 2007; unofficial Ministry of Finance data on expenditures by central ministries, as of July 15, 2007; and U.S. Provincial Reconstruction Team (PRT) data on expenditures by Iraq’s provincial governments, as of October 21, 2007. The official Ministry of Finance expenditure reports reflected a much lower rate of spending on capital projects than the unofficial Ministry of Finance and unofficial PRT data showed. We did not independently verify the precision of the data on Iraq’s budget execution. 
However, the disparity between the different sets of data calls into question their reliability and whether they can be used to draw firm conclusions about the extent to which the Iraqi government has increased its spending on capital projects in 2007, compared with 2006. We are presenting the PRT data in appendix III for informational purposes, even though our field work raised questions about their reliability. To obtain a better understanding of Iraqi capital budget and spending data and Iraqi practices in developing expenditure data, we interviewed U.S. officials with the Departments of Defense (DOD), State (State), and the Treasury (Treasury); and the U.S. Agency for International Development (USAID) in Washington, D.C., and Baghdad. We also interviewed advisors to the Ministry of Finance, working under a contract with the United Kingdom’s Department for International Development. We also reviewed U.S. embassy reports on Iraqi budget execution, Iraqi government instructions for executing the budget, Iraq’s Financial Management Law, the July 2007 Quarterly and Semiannual Report to the Congress of the Special Inspector General for Iraq Reconstruction (SIGIR), and the administration’s July and September 2007 Benchmark Assessment Reports. To examine the factors affecting the Iraqi government’s ability to spend its capital projects budget, we reviewed and analyzed reports and interviewed officials from DOD, State, Treasury, and USAID. We also interviewed advisors to the Ministry of Finance, working under a contract with the United Kingdom’s Department for International Development. We interviewed these officials in Iraq over the telephone and visited Iraq in July 2007. We also reviewed Iraq’s Financial Management Law and relevant reports from the World Bank, the International Monetary Fund (IMF), Ernst and Young, and SIGIR. In addition, we reviewed previous GAO reports. We reviewed information provided in these interviews and reports to identify the different factors affecting Iraq’s ability to spend its capital projects budget. To examine U.S. government efforts to assist the Iraqi government in executing its capital projects budget, we interviewed officials from DOD, State, Treasury, and USAID. We reviewed several U.S. government documents, including State’s April 2007 quarterly section 1227 report to Congress on the military, diplomatic, political, and economic measures undertaken to complete the mission in Iraq; DOD’s quarterly reports to Congress, Measuring Stability and Security in Iraq, from November 2006 to September 2007; the USAID contract awarded in July 2006 to Management Systems International, Inc., Building Recovery and Reform through Democratic Governance National Capacity Development Program; a status report on USAID’s implementation of the Iraqi Financial Management Information System under the Economic Governance Project II; and reports from USAID’s Iraq Local Governance Program. We conducted this performance audit from April 2007 through December 2007 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Budget execution is a relative measure comparing actual expenditures to the budget. 
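To make the arithmetic in this appendix concrete, the short sketch below shows how the budget execution ratio and the dinar-to-dollar conversion can be computed. It is an illustrative sketch only, not code used by GAO, Treasury, or the Ministry of Finance; the exchange rates (ID 1,500 and ID 1,260 per dollar) and the January-through-August dinar expenditure totals (ID 21,900 billion in 2006 and ID 18,600 billion in 2007) are the figures reported in this appendix, while the $36-spent-of-$100 example is purely hypothetical.

```python
# Illustrative sketch only -- not code from GAO, Treasury, or the Ministry of
# Finance. Dinar figures and exchange rates are those cited in this appendix;
# the $36/$100 example is hypothetical.

def budget_execution_ratio(expenditures, budget):
    """Budget execution: actual expenditures divided by the budget."""
    return expenditures / budget

# Hypothetical spending unit: $36 spent against a $100 budget.
print(f"Execution ratio: {budget_execution_ratio(36, 100):.0%}")   # 36%

# Exchange rates Treasury uses to convert Iraqi dinars (ID) to U.S. dollars.
RATE_2006 = 1_500   # ID per dollar
RATE_2007 = 1_260   # ID per dollar

# January-August expenditures reported in billions of Iraqi dinars.
SPENT_2006_ID = 21_900
SPENT_2007_ID = 18_600

# In dinar terms, 2007 spending ran about 15 percent below 2006 spending.
print(f"Change in dinar terms: {SPENT_2007_ID / SPENT_2006_ID - 1:+.0%}")

# Converting the 2007 dinar total at the stronger 2007 rate yields about
# 19 percent more dollars than converting it at the 2006 rate would.
at_2007_rate = SPENT_2007_ID / RATE_2007   # billions of dollars
at_2006_rate = SPENT_2007_ID / RATE_2006   # billions of dollars
print(f"Exchange-rate effect: {at_2007_rate / at_2006_rate - 1:+.0%}")
```

The same exchange-rate effect explains why dollar-denominated comparisons between 2006 and 2007 can overstate 2007 spending relative to the underlying dinar figures.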
Using the budget execution metric, the government spent 36 percent of its budget during the first 8 months of 2007, compared with 43 percent during the same period in 2006. Tables 2 and 3 provide a breakdown of budget execution for the first 8 months of 2006 and 2007, respectively. While we were not able to determine the reliability of these official Iraqi expenditure data, we are presenting this analysis because it raises additional questions about the data presented by the administration in its September 2007 assessment of Iraqi benchmarks. Whereas the unofficial Iraqi expenditure data cited by the administration suggest that the Iraqi government has improved its ability to spend capital projects funds in 2007, this analysis of official Iraqi expenditure reports suggests the opposite. To compare the 2006 and 2007 budgets, we combined various expenditure categories into four groups. As explained in the report, beginning in 2007, the Iraqi government adopted a new chart of accounts as recommended by the IMF, which complicates efforts to compare 2006 with 2007. Column 1 in table 4 shows the nine categories of expenditures reported in 2006 and their combination into the four groups presented in the table; column 2 shows the eight categories used in the 2007 chart of accounts. The percentage of the budget expended as of August 31 is the ratio used to measure budget execution for 2006 and 2007 in tables 2 and 3, respectively. Only in the category of goods and services did budget execution appear to increase, from 22 percent in 2006 to 27 percent in 2007. In general, the Iraqi ministries have compensated their employees; however, even in the employee compensation category, budget execution decreased from 65 percent in 2006 to 53 percent in 2007. As mentioned previously, the expenditures for capital projects are no longer reported as a separate category in official Ministry of Finance accounts. The 2007 budget provides separate categories for capital goods and capital projects. However, the reported expenditures in the “nonfinancial assets” category, which we refer to as investment, combine capital goods and capital projects. The capital projects budget of $10.1 billion represents 88 percent of the combined investment category of $11.4 billion. The budget execution ratio of this investment category was 5 percent for the first 8 months of 2007, compared with 13 percent for the first 8 months of 2006. The expenditure performance of the Iraqi government from January through August 2007 may even be worse than the dollar expenditure figures suggest. The Ministry of Finance reports the government’s budget and expenditures in its own currency, Iraqi dinars (ID). The U.S. Treasury converts them to dollars using a budget exchange rate of ID 1,500 per dollar in 2006 and ID 1,260 per dollar in 2007. Because of this change in the conversion rate, the dollar value of expenditures from January through August 2007 is about 19 percent higher than it would be at the 2006 rate. Expenditures from January through August 2007 were ID 18,600 billion, or about 15 percent lower than the ID 21,900 billion spent over the same period in 2006. Tables 5 and 6 provide additional details on provincial capital projects budgets, by allocations, committed funds, and spent funds, for 2006 and 2007, as of October 21, 2007. These funding levels are based on data collected and reported by U.S.-led PRTs. Because the government of Iraq reports provincial spending only in the aggregate, the embassy relies on PRT data to track provincial capital projects spending. 
We are presenting the PRT data for informational purposes, even though our field work raised questions about their reliability. In addition, Steve Lord, Acting Director; Lynn Cothern; Howard Cott; Martin De Alteriis; Timothy Fairbanks; Victoria H. Lin; Bruce Kutnick; Mary Moutsos; and Sidney Schwartz made key contributions to this report.
The President's New Way Forward in Iraq identified Iraq's inability to spend its resources to rebuild infrastructure and deliver essential services as a critical economic challenge to Iraq's self-reliance. Further, Iraq's ability to spend its $10.1 billion capital projects budget in 2007 was one of the 18 benchmarks used to assess U.S. progress in stabilizing and rebuilding Iraq. This report (1) examines data the U.S. embassy used to determine the extent to which the government of Iraq spent its 2007 capital projects budget, (2) identifies factors affecting the Iraqi government's ability to spend these funds, and (3) describes U.S. government efforts to assist the Iraqi government in spending its capital projects funds. For this effort, GAO reviewed Iraqi government budget data and information on provincial spending collected by the U.S. Provincial Reconstruction Teams. GAO also interviewed officials from the departments of the Treasury, Defense, State, and other agencies and organizations. U.S. and Iraqi reports show widely disparate rates for Iraqi government spending on capital projects. Accordingly, GAO cannot determine the extent to which the Iraqi government is spending its 2007 capital projects budget. In its September 2007 Iraqi benchmark assessment, the administration reported that Iraq's central government ministries had spent 24 percent of their 2007 capital projects budget, as of July 15, 2007. However, this figure is not consistent with Iraq's official expenditure reports, which show that the central ministries had spent only 4.4 percent of their investment budget as of August 2007. The discrepancies between the official and unofficial data highlight uncertainties about the sources and use of Iraq's expenditure data. The government of Iraq faces many challenges that limit its ability to spend its capital projects budget. Violence and sectarian strife delay capital budget execution by increasing the time and cost needed to implement contracts. Recent refugee flows and the de-Ba'athification process have contributed to the exodus of skilled labor from Iraq. In addition, U.S. and foreign officials noted that weaknesses in Iraqi procurement, budgeting, and accounting procedures impede completion of capital projects. For example, according to the State Department, Iraq's Contracting Committee requires about a dozen signatures to approve projects exceeding $10 million, which slows the process. U.S. agencies have undertaken a variety of programs to help Iraq execute its capital projects budget, although it is not clear what impact these efforts have had to date. U.S. agencies supported new efforts in 2007 targeting Iraq's ability to spend capital budget funds, including an office to provide procurement assistance to ministries and provinces and a new position in the U.S. Embassy to coordinate with senior Iraqi government officials on budget execution and oversee related U.S. assistance efforts. In addition, improving Iraqi government budget execution is part of a broader U.S. assistance effort to improve the capacity of the Iraqi government. For example, the U.S. Agency for International Development (USAID) has trained 500 ministry officials in procurement or budget execution. USAID also led an effort to implement an automated financial management information system for the Iraqi government, although this program was suspended in June 2007 following the kidnapping of five contractors involved in the project. In addition, U.S. 
advisors work directly with key Iraqi ministries to assist with budget execution and procurement, among other responsibilities.
The Foreign Buildings Act of 1926, as amended, authorizes the secretary of state to sell, exchange, or lease any property acquired abroad that is used for diplomatic and consular establishments in foreign countries. The law authorizes the secretary to use the sales proceeds to acquire and maintain other property overseas. It also requires the secretary to report such transactions to the Congress with the department’s annual budget estimates. The secretary of state delegated the secretary’s authority under the law to the Bureau of Overseas Buildings Operations (OBO). Thus, OBO is responsible for establishing and overseeing policies and procedures for the department’s real estate properties. In 1996, we reported that the State Department did not have a systematic process for identifying unneeded properties and disposing of them. At that time, the department identified potentially unneeded properties through a variety of ad hoc and uncoordinated actions that we believed did not constitute an organized and effective system for identifying such properties. We also reported that decisions about the sale of unneeded overseas real estate properties had been delayed for years because of disputes between OBO and the regional bureaus and embassies. To speed these decisions by providing a final, authoritative forum for the disputing parties to argue their positions, we recommended that the State Department establish an independent panel to review disputed properties and decide which ones should be sold. In September 1996, the Congress directed the secretary of state to establish an advisory board on real property management to (1) review information about properties proposed for sale and (2) compile a list of properties recommended for sale to be approved by the under secretary of state for management. The Congress also directed the State Department to transmit this list to the appropriate congressional committees and to proceed with the immediate sale of properties on the approved list as soon as market conditions were appropriate. In response to the congressional direction, in April 1997, the assistant secretary of state for administration created the Real Property Advisory Board to review and make recommendations about the sale of disputed properties. The advisory board’s charter authorizes a seven-member panel appointed by the under secretary of state for management consisting of three real estate professionals from outside the State Department and four high-ranking department officials. The board is authorized to (1) review information on properties proposed for sale by the State Department, the State IG, our office, or any other federal agency and (2) compile a list of properties recommended for sale to be approved by the under secretary of state for management. The charter directs the advisory board to meet at least once each fiscal year and to proceed “as far as possible” by consensus in deciding which properties to recommend for sale. A 1999 State IG report found that the State Department had substantially complied with the Congress’s intent (and our 1996 recommendation) in drafting the advisory board’s charter and reporting the board’s actions to the Congress. The report also found that the advisory board had functioned in a manner consistent with its charter, and that its recommendations were based on sufficient and balanced information. 
Since 1996, OBO has taken steps to implement a more systematic process for identifying unneeded properties, which has resulted in post and OBO officials’ placing greater emphasis on identifying properties that could be sold. Steps reflecting this emphasis include an annual request to posts asking them to identify government-owned properties that should be considered for disposal and increased efforts by OBO and IG officials to identify such properties when they visit posts. However, the department’s ability to monitor property use and identify potentially unneeded properties is hampered by weaknesses in its property inventory system. In response to our 1996 report, OBO began asking posts during an annual property inventory to identify properties that should be considered for disposal. OBO has included this request as part of State’s annual chiefs of mission certification that posts are in compliance with the Foreign Affairs Manual with regard to the management of real property. This process has helped the State Department to more systematically identify unneeded property. For example, in 2001, OBO cabled all posts for this purpose in July and sent follow-up cables to unresponsive posts in August. OBO’s initial cable requested that posts report all government-owned real property that should be considered for disposal, including properties for which posts had disposal processes under way. Posts were instructed to include excess office space, excess and oversized/overstandard housing, vacant or underutilized lots, properties used infrequently or for purposes such as unofficial business, and any other properties that could be considered appropriate for disposal. OBO officials explained that the effectiveness of this identification effort depended on posts’ responding fully and promptly. In 2001, almost all posts complied. As a result of this process, the department identified 130 potentially unneeded properties. In addition to the annual post certification process, the director of OBO has instructed bureau officials to emphasize identification of unneeded property. For example, OBO officers have been instructed to pay more attention to identifying potentially disposable property during post visits to oversee and resolve real estate issues. OBO officials said this increased emphasis has helped posts and OBO to continually focus on the need to dispose of unneeded property. The State Department’s IG reviews property use issues as part of its regular inspections. In addition, in February 1998, the under secretary for management asked the IG to specifically include identification of excess, underutilized, and obsolete properties as part of the IG’s inspections and audits at overseas posts and to provide periodic summaries on these data collection efforts. This work was aimed at identifying potentially excess, underutilized, and obsolete properties on the basis of existing criteria and was not a substantive review of the reasons why posts should or should not retain these properties. It ended in June 2001 by mutual consent between the IG and the under secretary for management, but the IG still reviews property status and use as part of its post inspections. The IG’s final report stated that the office found 21 excess, 160 underutilized, and 51 obsolete properties during this 3-year review. The State Department agreed to sell 72 of these properties. The IG stated that these reviews were useful and productive. 
It added that chiefs of mission and other senior officials were interested in this work, and, as a result, the IG noted increased emphasis on real property management. An IG official said they would only start a similar effort again if it is requested by the under secretary. According to this official, OBO’s new director has taken a more aggressive approach to identifying and selling unneeded property, which reduces the need for any additional IG effort at this time. The State Department’s worldwide real property inventory contains many errors and omissions. To better monitor property use and identify potentially unneeded properties, accurate inventory data are needed. Accurate real property data are also needed for the worldwide inventory that the General Services Administration keeps at the Congress’s request. OBO, however, has had difficulty getting posts to ensure that data in its inventory database are accurate, which is a long-standing problem. We observed problems involving properties sold but not removed from the inventory, properties acquired but not added to the inventory, and errors in cost and other descriptive information. For example, In June 2001, the inventory still listed an office building and the consul general’s residence in Alexandria, Egypt, which were sold in 1997 and 1998 (for more than $5 million). Acquisition cost was overstated by about $300 million for three properties in Bamako, Mali, and by nearly $132 million for one property in Yaounde, Cameroon, due to data input errors, according to OBO. Inaccurate inventory information can result in unneeded properties not being identified for potential sale. For example, a parking lot in Paris purchased in 1948 was not included in the inventory until an IG visit in 1998 highlighted the lot’s absence from the inventory. The property is currently being marketed and is valued at up to $10 million. We also found that the number of properties listed in the inventory does not accurately reflect the number of properties the State Department manages because, according to OBO, posts have inconsistently assigned property identification numbers. Posts sometimes assigned separate numbers to land and associated buildings. For example, the embassy in Paris is listed as three separate properties—the land and two buildings. The buildings were acquired separately but are now connected. The three properties comprise one compound. At other times, posts assigned one number to multiple properties—for example, in Brasilia, four separate lots were given one property identification number. Along with its other efforts, OBO is attempting to improve the accuracy, and therefore the reliability, of the State Department’s worldwide overseas property inventory data. According to OBO officials, since individual posts are responsible for entering their own data, correcting inaccuracies requires that they routinely check and update data in their property inventories. To help posts keep accurate inventory data, OBO has provided 238 posts with computer software for recording their property inventories, along with a user manual that gives step-by-step instructions. However, according to OBO, 185 posts have installed or are in the process of installing the software, leaving 53 posts that are not using it—thereby negatively affecting the consistency and accuracy of inventory data. In November 2001, OBO reported that about 20 posts had not corrected known errors or omissions in their property inventories. 
Because of such errors and omissions, some OBO staff said they do not rely on the property inventory for their work and instead keep their own property inventory information. The State Department’s performance in selling unneeded property has significantly improved in the last 5 years. Property sales proceeds were more than three times those of the previous 5-year period. However, despite this progress, the department still has a large number of potentially unneeded properties that remain unsold. In 2001, the State Department began several initiatives intended to expedite the sale of unneeded properties, including (1) using “business case” analyses to ensure that financial and economic factors were included in the property sales decision process, (2) emphasizing the use of commercial real estate marketing services, and (3) more aggressively focusing on resolving property disputes. The State Department sold 104 properties for more than $404 million from fiscal years 1997 through 2001. This is a threefold increase in proceeds compared with the 65 properties the department sold for more than $133 million from fiscal years 1992 through 1996 (see fig. 1). Large-value sales from fiscal years 1997 through 2001 included a compound in Seoul, South Korea (almost $99 million in installment payments), and the former chancery in Singapore for nearly $60 million. As of September 30, 2001, the State Department reported that 92 properties were potentially available for sale. These properties have an estimated value of more than $180 million. Many of these properties have been identified for potential sale for years, including 35 that date back to 1997. In 2001, the new OBO director introduced “business case” sales analysis to the process of determining whether a property should be sold. This new framework considers economic and financial factors, along with diplomatic and security issues and post concerns. According to OBO officials, the State Department’s former property sales decision-making process generally did not fully consider economic and financial factors. OBO officials said the new framework has helped OBO in its effort to gain agency consensus regarding property sales and is already producing results. OBO officials also stated that the director has made business case-based decisions to sell properties in at least six posts, including Paris, where he has directed the post to sell a parking lot and an office building. Another initiative designed to expedite property sales is OBO’s award of indefinite quantity contracts to several international real estate brokerage firms for real estate marketing services. OBO officials believe these contracts will speed overseas property sales, give OBO greater control over the sales process, and relieve the administrative burden that property sales place on posts. Under these contracts, the brokerage firms will do tasks formerly performed by the posts, including advertising properties, identifying prospective buyers, receiving bids, and conducting negotiations. However, the brokers cannot conclude sales without the State Department’s approval. As of March 2002, OBO was using these contracts to market 20 properties at 10 posts, including 5 properties OBO has been trying to sell for several years. OBO stated that it has not been able to fully evaluate the effectiveness of these contracts since the program has just started. 
Furthermore, to reduce the department’s inventory of unneeded properties, the new OBO director has focused on resolving disputes with host countries and posts that have delayed the sale of valuable properties. For example, OBO intends to sell a high-value Bangkok residential compound that has been under consideration since the early 1990s but delayed due to post objections. The Asian financial crisis in 1997 temporarily halted this debate, according to OBO officials, but OBO is now pushing to sell the property. OBO officials added that the director has also addressed disputed properties at five more posts. The State Department has not yet sold 19 of 26 properties recommended for sale by the Real Property Advisory Board and approved by department management. Since its inception in 1997, the advisory board has reviewed 41 disputed properties and recommended that 27 be sold (department management approved the sale of 26 of these properties). As of April 2002, the State Department had disposed of 7 (including 2 for which it terminated the long-term lease) of the 26 properties for about $21 million. Sales of the remaining 19 properties, valued at about $70 million, have been delayed by host country restrictions (12 properties), the need to find replacement properties (4 properties), and post objections (3 properties). OBO officials acknowledged that the department has moved slowly to resolve some of these impediments. As a result, the advisory board has reviewed the status of most properties multiple times over several years. Our analysis of department records shows that of the 41 disputed properties reviewed since 1997, the advisory board recommended selling 27 (26 were approved for sale by State Department management) and retaining 9. The board planned to revisit the cases of 4 properties at a later date and ended its review of 1 property in Manila after concluding that the issue at hand was largely political and diplomatic. The advisory board reached these decisions and compiled its list of recommended sales by consensus. Our analysis of department records and discussion with a board member showed that in reaching these decisions, the advisory board’s consideration of economic analyses was balanced by consideration of political and diplomatic factors, such as representational concerns and the historic value of the properties. Figure 2 summarizes the board’s recommendations for all 41 properties through its mid-November 2001 meeting. The assistant secretary of state for administration or under secretary of state for management reviewed and approved 26 of the board’s 27 sales recommendations. Our analysis of department records shows that the State Department has disposed of 7 of these 26 properties for $20.6 million. According to OBO, the estimated value of the 19 unsold properties is about $70 million. In addition, the State Department has decided to sell the property in Manila that the board had considered but on which it had declined to make a recommendation. Table 1 summarizes the factors that have delayed the sale of these properties. Appendix I provides additional information about the disposition and status of all 41 properties reviewed by the advisory board. Because property sales were delayed, the advisory board reviewed the status of most properties submitted in 1997 through 2000 multiple times over several years. Our analysis of department records shows that, on average, the board reviewed 34 properties 4 times over 3.1 years. 
In addition, as of the board’s last meeting in November 2001, 17 of the properties had been sold, retained for use, or otherwise discharged by the board. The board had reviewed each of the remaining 17 properties (awaiting sale) an average of almost 6 times over 4.4 years. Table 2 shows the results of our analysis. OBO officials predict that the State Department will implement advisory board recommendations more quickly in the future. According to these officials, recent actions to expedite property sales, such as contracting for real estate appraisal and marketing services, will reduce delays in implementing the advisory board’s recommendations by making approved property sales less susceptible to post appeals and inaction. Moreover, OBO believes that its enhanced standing within the department will reduce delays by giving OBO a greater voice in intradepartmental discussions to counterbalance post appeals. In 2001, OBO was upgraded from an office reporting to the assistant secretary of state for administration to a bureau reporting to the under secretary of state for management. OBO officials also said the advisory board’s support for OBO’s position on most properties was a positive factor in helping to reduce post resistance to proposed sales. OBO has implemented a number of initiatives to improve the identification of unneeded properties. Accurate property inventory data would help OBO and the posts to further identify such properties. However, inventory data are currently inaccurate and therefore unreliable, and post cooperation in correcting these errors and omissions has been inconsistent. While OBO has taken action to expedite property sales, difficulties reaching consensus within the State Department on sales of individual properties continue to cause delays. Furthermore, the State Department has not fully implemented most of the Real Property Advisory Board’s recommendations, and properties valued at about $70 million have not been sold. Additional property sales could be delayed unless the department takes action to ensure that approved sales recommendations are implemented as the Congress intended—as soon as market conditions are appropriate and any issues with the host country are resolved. To improve the State Department’s ability to identify properties that may be available for sale, we recommend that the secretary of state take action to improve the accuracy of the real property inventory. Ensuring that all posts install and use the new automated property inventory software would be a key step. In written comments on a draft of this report, the State Department stated that it is in total agreement with our recommendation and is taking steps to implement it. The department added that it believes this report is a fair and accurate representation of its ongoing efforts to dispose of unneeded real property overseas, and that the report recognizes the progress and the many improvements that have been made and continue to be made. The department also stated that the cooperative effort between the legislative and executive branches on this review can serve as a model for future work. In a draft of this report, we had recommended that the secretary of state direct the department to proceed with property sales as soon as market conditions are appropriate to ensure that disputed overseas real estate properties are sold as expeditiously as possible. 
The department responded that it believed the recommendation is unnecessary due to the enhanced position of the OBO Bureau and the proactive approach and involvement of its director in property disposal issues. It added that it appreciated the intent of the recommendation, that the secretary use his office as necessary and appropriate to expedite disposal of unneeded property, and that this option is always available should it become necessary. On the basis of these comments, we deleted this recommendation from the report. However, as the department noted in its comments, instances may arise when involvement by the secretary does become necessary, specifically to emphasize resolution of issues caused by host country restrictions on property sales that require diplomatic negotiations, such as the case with the 12 properties in Brasilia. It is therefore important that the director of OBO keep the under secretary for management informed on the status of all properties being considered for sale to avoid the type of lengthy delays experienced in the past. To determine if the State Department has taken steps to improve its process for identifying unneeded properties that are potentially available for disposal, we interviewed OBO’s director and other OBO officials concerning OBO policies and processes for identifying unneeded property and determining when properties should be sold. We reviewed documents relating to OBO’s identification of unneeded property potentially available for disposal, including the State Department’s quarterly reports to the Congress describing properties potentially available for disposal during that quarter. We also examined OBO’s policies and processes for entering information into its real property worldwide database and issues affecting quality control over this information, and we reviewed the department’s worldwide property inventory as part of our effort to assess the accuracy of the property database. In addition, we reviewed sections of the Foreign Affairs Manual applicable to property management overseas and documents prepared by State Department officials in response to our questions about their processes for identifying unneeded property. To assess the Department of State’s performance in selling unneeded properties, we analyzed quarterly reports to the Congress identifying property sales since 1997 and properties that are still available for disposal. We also reviewed OBO policies and processes, focusing on actions OBO has taken to overcome constraints that have delayed sales, such as disputes with posts and host government restrictions. We also interviewed officials at OBO’s Real Estate and Property Management and Area Management offices to identify the status of properties being considered for sale and to understand how they deal with the posts concerning individual property sales. In addition, we reviewed the department’s long-range overseas buildings plan to identify property the department plans to sell through fiscal year 2007. To determine whether the State Department has implemented the Real Property Advisory Board’s recommendations, we analyzed the House conference report that directed the department to establish the board, our prior and State IG reports, and applicable department policies and guidance in the Foreign Affairs Manual and Foreign Affairs Handbook. 
We analyzed records prepared by State Department officials in response to our questions about the advisory board, minutes of the board’s eight meetings, and the board’s original and modified charters. We also interviewed a member of the advisory board and State Department officials involved in reviewing the properties included in our evaluation. In addition, we analyzed the minutes of the advisory board’s meetings and other records to determine the number of properties submitted to the board for review from 1997 through 2001, the board’s recommendations for these properties (sell, retain, revisit, or other), and the current status of these properties (sold, retained, or awaiting sale). For properties submitted to the advisory board from 1997 through 2000, we analyzed these records to determine the number of times and the length of time the board reviewed each of these properties. This analysis excluded seven properties submitted to the advisory board at its mid-November 2001 meeting because we do not yet know whether State will implement the board’s sales recommendations before its next meeting. We conducted our review from June 2001 through April 2002 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to interested congressional committees and the secretary of state. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or at fordj@gao.gov. Contacts and staff acknowledgments are listed in appendix III. As detailed in appendix I, the properties’ status as of March 2002 included completed sales between 1998 and 2000 for amounts ranging from $239,082 to $12.5 million; properties retained for reasons such as providing a security buffer, recreational use, parking, housing for a defense attaché, or serving as the site for a new office building; and pending sales delayed by factors such as a tax dispute with a host government, the need to buy or lease suitable replacement facilities, potential host government restrictions, and post inaction. 
The following are GAO’s comments on the Department of State’s letter dated May 15, 2002. 1. We deleted this example from the final report. 2. The State Department’s property inventory records from March 1998 did not include the parking lot and listed the ambassador’s residence as 3.01 acres, not 3.4 acres. Subsequent inventory records from 1999 and 2001 listed the parking lot at 0.4 acres and also continued to list the ambassador’s residence as 3.01 acres. In addition to the contact named above, Janey Cohen, Ed Kennedy, Jesus Martinez, Michael Rohrback, and Richard Seldin made key contributions to this report.
The U.S. government owns about 3,500 properties overseas at more than 220 locations, including embassy and consular office buildings, housing, and land. The Department of State is responsible for acquiring, managing, and disposing of these properties. In 1996, GAO reported that the State Department did not have an effective process for identifying and selling unneeded overseas real estate, and that decisions concerning the sale of some properties had been delayed for years because of parochial conflicts among the parties involved. The State Department has taken steps to implement a more systematic process for identifying unneeded properties by (1) requesting posts to annually identify excess, underutilized, and obsolete property and (2) requesting its own staff and Inspector General officials to place greater emphasis on identifying such property when they visit posts. The State Department has significantly increased its sales of unneeded properties in the last 5 years. From 1997 through 2001, it sold 104 overseas properties for over $404 million, almost triple the proceeds of the previous 5-year period. However, the department still has a large number of unneeded properties that have not yet been sold. The State Department has not effectively implemented recommendations made by the Real Property Advisory Board to sell unneeded property. State has disposed of only 7 of the 26 properties recommended for sale by the board.
In 1986, IRCA established the employment verification process based on employers’ review of documents presented by employees to prove identity and work eligibility. On the Form I-9, employees must attest that they are U.S. citizens, lawfully admitted permanent residents, or aliens authorized to work in the United States. Employers must then certify that they have reviewed the documents presented by their employees to establish identity and work eligibility and that the documents appear genuine and relate to the individual presenting them. In making their certifications, employers are expected to judge whether the documents presented are obviously counterfeit or fraudulent. Employers generally are deemed in compliance with IRCA if they have followed the Form I-9 process in good faith, including when an unauthorized alien presents fraudulent documents that appear genuine. Following the passage of IRCA in 1986, employees could present 29 different documents to establish their identity and/or work eligibility. In a 1997 interim rule, the former U.S. Immigration and Naturalization Service (INS) reduced the number of acceptable work eligibility documents from 29 to 27. The Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996 required the former INS and SSA to operate three voluntary pilot programs to test electronic means for employers to verify an employee’s eligibility to work, one of which was the Basic Pilot Program. The Basic Pilot Program was designed to test whether pilot verification procedures could improve the existing employment verification process by reducing (1) false claims of U.S. citizenship and document fraud, (2) discrimination against employees, (3) violations of civil liberties and privacy, and (4) the burden on employers to verify employees’ work eligibility. In 2007, USCIS renamed the Basic Pilot Program the Employment Eligibility Verification (EEV) program. EEV provides participating employers with an electronic method to verify their employees’ work eligibility. Employers may participate voluntarily in EEV, but are still required to complete Forms I-9 for all newly hired employees in accordance with IRCA. After completing the forms, these employers query EEV’s automated system by entering employee information provided on the forms, such as name and Social Security number, into the EEV Web site within 3 working days of the employees’ hire date. The program then electronically matches that information against information in SSA’s NUMIDENT database and, for noncitizens, DHS databases to determine whether the employee is eligible to work. EEV electronically notifies employers whether their employees’ work authorization was confirmed. Those queries that the DHS automated check cannot confirm are referred to DHS immigration status verifiers, who check employee information against information in other DHS databases. The EEV process is shown in figure 1. In cases when EEV cannot confirm an employee’s work authorization status either through the automatic check or the check by an immigration status verifier, the system issues the employer a tentative nonconfirmation of the employee’s work authorization status. In this case, the employers must notify the affected employees of the finding, and the employees have the right to contest their tentative nonconfirmations by contacting SSA or USCIS to resolve any inaccuracies in their records within 8 days. 
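The sequence of checks just described can be summarized in a short decision sketch. The following Python example is illustrative only; the record fields, database stand-ins, and matching rules are assumptions made for the example and do not reflect the actual EEV implementation, which matches queries against SSA's NUMIDENT database and, for noncitizens, DHS databases.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical stand-ins for SSA and DHS records; field names and matching
# rules are illustrative assumptions only.
@dataclass
class Query:
    name: str
    ssn: str
    citizen: bool
    hire_date: date

def run_eev_query(q, ssa_records, dhs_work_authorized, query_date):
    """Simplified sketch of the verification sequence described above."""
    # Employers must query EEV within 3 working days of the hire date
    # (calendar days are used here for simplicity).
    if query_date > q.hire_date + timedelta(days=3):
        return "query submitted outside the 3-day window"

    # Automated match against SSA records; noncitizens are also checked
    # against DHS records.
    ssa_match = ssa_records.get(q.ssn) == q.name
    dhs_match = q.citizen or q.ssn in dhs_work_authorized
    if ssa_match and dhs_match:
        return "employment authorized"

    # Queries the automated checks cannot confirm are referred to a DHS
    # immigration status verifier; if still unresolved, the employer receives
    # a tentative nonconfirmation and the employee has 8 days to contest.
    return "tentative nonconfirmation (employee may contest within 8 days)"

# Example with made-up data.
ssa_records = {"123-45-6789": "JANE DOE"}
result = run_eev_query(
    Query(name="JANE DOE", ssn="123-45-6789", citizen=True, hire_date=date(2007, 5, 1)),
    ssa_records,
    dhs_work_authorized=set(),
    query_date=date(2007, 5, 2),
)
print(result)  # employment authorized
```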
During this time, employers may not take any adverse actions against those employees, such as limiting their work assignments or pay. After 10 days, employers are required to either immediately terminate the employment or notify DHS of the continued employment of workers who do not successfully contest the tentative nonconfirmation and those who the pilot program finds are not work-authorized. The EEV program is a part of USCIS’s Systematic Alien Verification for Entitlements Program, which provides a variety of verification services for federal, state, and local government agencies. USCIS estimates that there are more than 150,000 federal, state, and local agency users that verify immigration status through the Systematic Alien Verification for Entitlements Program. SSA also operates various verification services. Among these are the Employee Verification Service (EVS) and the Web-based SSN Verification Service (SSNVS), which can be used to provide verification that employees’ names and Social Security numbers match SSA’s records. These services, designed to ensure accurate employer wage reporting, are offered free of charge. Employer use is voluntary, and the services are not widely used. Mandatory electronic employment verification would substantially increase the number of employers using the EEV system, which would place greater demands on USCIS and SSA resources. As of May 2007, about 17,000 employers have registered to use the program, 8,863 of which are active users, and USCIS has estimated that employer registration is expected to greatly increase by the end of fiscal year 2007. If participation in the EEV program were made mandatory, the program might have to accommodate all of the estimated 5.9 million employers in the United States. USCIS officials estimate that meeting a December 2008 implementation date could require about 30,000 employers to register with the system per day. Mandatory use of EEV would also affect the capacity of the system because of the increased number of employer queries. USCIS has estimated that a mandatory EEV could cost USCIS $70 million annually for program management and $300 million to $400 million annually for compliance activities and staff. The costs associated with other programmatic and system enhancements are currently unknown. According to USCIS, cost estimates will rise if the number of queries rises, although officials noted that the estimates may depend on the method for implementing a mandatory program. SSA officials told us they have estimated that expansion of the EEV program to levels predicted by the end of fiscal year 2007 would cost $5 million to $6 million, but SSA was not yet able to provide us estimates for the cost of a mandatory EEV. According to SSA officials, the cost of a mandatory EEV would be driven by the increased workload of its field office staff due to resolving SSA tentative nonconfirmations. A mandatory EEV would require an increase in the number of USCIS and SSA staff to operate the program. For example, USCIS had 13 headquarters staff members in 2005 to run the program and 38 immigration status verifiers available for secondary verification. USCIS plans to increase staff levels to 255 to manage a mandatory program, which includes increasing the number of immigration status verifiers who conduct secondary verifications. USCIS officials expressed concern about the difficulty in hiring these staff due to lengthy hiring processes, which may include government background checks. 
In addition, according to SSA officials, a mandatory EEV program would require additional staff at SSA field offices to accommodate an increase in the number of individuals visiting SSA field offices to resolve tentative nonconfirmations. According to SSA officials, the number of new staff required would depend on both the legislative requirements for implementing mandatory EEV and the effectiveness of efforts USCIS has under way to decrease the need for individuals to visit SSA field offices. For this reason, SSA officials told us they have not yet estimated how many additional staff they would need for a mandatory EEV. In prior work, we reported that secondary verifications lengthen the time needed to complete the employment verification process. The majority of EEV queries entered by employers—about 92 percent—confirm within seconds that the employee is authorized to work. About 7 percent of the queries are not confirmed by the initial automated check and result in SSA-issued tentative nonconfirmations, while about 1 percent result in DHS-issued tentative nonconfirmations. With regard to the SSA-issued tentative nonconfirmations, USCIS and SSA officials told us that the majority occur because employees’ citizenship status or other information, such as name changes, is not up to date in the SSA database. SSA does not update records unless an individual requests the update in person and submits the required evidence to support the change in its records. USCIS officials stated that, for example, when aliens become naturalized citizens, their citizenship status is often not updated in the SSA database. In addition, individuals who have changed their names for various reasons, such as marriage, without notifying SSA in person may also be issued an SSA tentative nonconfirmation. According to SSA officials, although SSA instructs individuals to report any changes in name, citizenship, or immigration status, many do not do so. When these individuals’ information is queried through EEV, a tentative nonconfirmation would be issued, requiring them to go to an SSA field office to show proof of the change and to correct their records in SSA’s database. USCIS and SSA are exploring some options to improve the efficiency of the verification process. For example, USCIS is exploring ways to automatically check for naturalized citizens’ work authorization using DHS databases before the EEV system issues a tentative nonconfirmation. Furthermore, USCIS is planning to provide naturalized citizens with the option, on a voluntary basis, to provide their Alien Number or Naturalization Certification Number so that employers can query that information through the EEV system before referring the employees to SSA to resolve tentative nonconfirmations. SSA is also coordinating with USCIS to develop an automated secondary verification capability, which may reduce the need for employers to take additional steps after the employee resolves the SSA tentative nonconfirmation. USCIS and SSA officials told us that the agencies are planning to provide SSA field office staff with access to the EEV system so that field office staff can resolve the SSA tentative nonconfirmation directly in the system at the time the employee’s record is updated at the field office. According to SSA officials, the automated secondary verification capability is tentatively scheduled to be implemented by October 2007. 
While these steps may help improve the efficiency of the verification process, including eliminating some SSA tentative nonconfirmations, they will not entirely eliminate the need for some individuals to visit SSA field offices to update records when individuals’ status or other information changes. USCIS and SSA officials noted that because the current EEV program is voluntary, the percentage of individuals who are referred to SSA field offices to resolve tentative nonconfirmations may not accurately indicate the number of individuals who would be required to do so under a mandatory program. SSA and USCIS officials expressed concern about the effect on SSA field offices’ workload if EEV were made mandatory and more individuals were required to visit a field office in person. In our prior work, we reported that EEV enhances the ability of participating employers to reliably verify their employees’ work eligibility and assists participating employers with identification of false documents used to obtain employment. If newly hired employees present false information, EEV would not confirm the employees’ work eligibility because their information, such as a false name or Social Security number, would not match SSA and DHS database information. However, the current EEV program is limited in its ability to help employers detect identity fraud, such as cases in which an individual presents borrowed or stolen genuine documents. USCIS has taken steps to reduce fraud associated with the use of documents containing valid information on which another photograph has been substituted for the document’s original photograph. In March 2007, USCIS began piloting a photograph screening tool as an addition to the current EEV system. According to USCIS officials, the photograph screening tool is intended to allow an employer to verify the authenticity of a Lawful Permanent Resident card (green card) or Employment Authorization Document containing a photograph of the document holder by comparing individuals’ photographs on the documents presented during the I-9 process to those maintained in DHS databases. As of May 2007, about 70 employers have been participating during the pilot phase of the photograph screening tool, and EEV has processed about 400 queries through the tool. USCIS expects to expand the program to all employers participating in EEV by the end of summer 2007. The use of the photograph screening tool is currently limited because newly hired citizens and noncitizens presenting forms of documentation other than green cards or Employment Authorization Documents to verify work eligibility are not subject to the tool. Expansion of the pilot photograph screening tool would require incorporating other forms of documentation with related databases. In addition, efforts to expand the tool are still in the initial planning stages. For example, according to USCIS officials, USCIS and the Department of State have begun exploring ways to include visa and U.S. passport documents in the tool, but these agencies have not yet reached agreement regarding the use of these documents. USCIS is also exploring a possible pilot program with state Departments of Motor Vehicles. In prior work, we reported that although not specifically or comprehensively quantifiable, the prevalence of identity fraud seemed to be increasing, a development that may affect employers’ ability to reliably verify employment eligibility in a mandatory EEV program. 
The large number and variety of acceptable work authorization documents—27 under the current employment verification process—along with inherent vulnerabilities to counterfeiting of some of these documents, may complicate efforts to address identity fraud. Although mandatory EEV and the associated use of the photograph screening tool offers some remedy, further actions, such as reducing the number of acceptable work eligibility documents and making them more secure, may be required to more fully address identity fraud. EEV is vulnerable to acts of employer fraud, such as entering the same identity information to authorize multiple workers. Although ICE has no direct role in monitoring employer use of EEV and does not have direct access to program information, which is maintained by USCIS, ICE officials told us that program data could indicate cases in which employers may be fraudulently using the system and therefore would help the agency better target its limited worksite enforcement resources toward those employers. ICE officials noted that, in a few cases, they have requested and received EEV data from USCIS on specific employers who participate in the program and are under ICE investigation. USCIS is planning to use its newly created Compliance and Monitoring program to refer information on employers who may be fraudulently using the EEV system, although USCIS and ICE are still determining what information is appropriate to share. Employees queried through EEV may be adversely affected if employers violate program obligations designed to protect the employees, by taking actions such as limiting work assignments or pay while employees are undergoing the verification process. The 2004 Temple University Institute for Survey Research and Westat evaluation of EEV concluded that the majority of employers surveyed appeared to be in compliance with EEV procedures. However, the evaluation and our prior review found evidence of some noncompliance with these procedures. In 2005, we reported that EEV provided a variety of reports that could help USCIS determine whether employers followed program requirements, but that USCIS lacked sufficient staff to do so. Since then, USCIS has added staff to its verification office and created a Compliance and Monitoring program to review employers’ use of the EEV system. However, while USCIS has hired directors for these functions, the program is not yet fully staffed. According to USCIS officials, USCIS is still in the process of determining how this program will carry out compliance and monitoring functions, but its activities may include sampling employer usage data for evidence of noncompliant practices, such as identifying employers who do not appear to refer employees contesting tentative nonconfirmations to SSA or USCIS. USCIS estimates that the Compliance and Monitoring program will be sufficiently staffed to begin identifying employer noncompliance by late summer 2007. USCIS’s newly created Compliance and Monitoring program could help ICE better target its worksite enforcement efforts by indicating cases of employers’ egregious misuse of the system. Currently, there is no formal mechanism for sharing compliance data between USCIS and ICE. ICE officials noted that proactive reduction of illegal employment through the use of functional, mandatory EEV may help reduce the need for and better focus worksite enforcement efforts. 
Moreover, these officials told us that mandatory use of an automated system like EEV could limit the ability of employers who knowingly hired unauthorized workers to claim that the workers presented false documents to obtain employment, which could assist ICE agents in proving employer violations of IRCA. Efforts to reduce the employment of unauthorized workers in the United States necessitate a strong employment eligibility verification process and a credible worksite enforcement program, and other immigration reforms may depend on them; however, a number of challenges face the successful implementation of a mandatory verification system. The EEV program shows promise for enhancing the employment verification process and reducing document fraud if implemented on a much larger scale, and USCIS and SSA have undertaken a number of steps to address many of the weaknesses we identified in the EEV program. USCIS has also spent the last several years planning for an expanded or mandatory program, and has made progress in several areas, but it is unclear at this time to what extent USCIS’s efforts will be successful under a mandatory EEV. It is clear, however, that a mandatory EEV system will require a substantial investment in staff and other resources, at least in the near term, in both agencies. There are also issues, such as identity fraud and intentional misuse, that will remain a challenge to the system. Implementing an EEV system to ensure that all individuals working in this country are doing so legally and that undue burdens are not placed on employers or employees will not be an easy task within the timelines suggested in reform proposals. This concludes my prepared statement. I would be pleased to answer any questions you and the subcommittee members may have. For further information about this testimony, please contact Richard Stana at 202-512-8777. Other key contributors to this statement were Blake Ainsworth, Frances Cook, Michelle Cooper, Rebecca Gambler, Kathryn Godfrey, Lara Laufer, Shawn Mongin, Justin L. Monroe, John Vocino, Robert E. White, and Paul Wright. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The opportunity for employment is one of the most powerful magnets attracting illegal immigration to the United States. The Immigration Reform and Control Act of 1986 established an employment eligibility verification process, but immigration experts state that a more reliable verification system is needed. In 1996, the former U.S. Immigration and Naturalization Service, now within the Department of Homeland Security (DHS), and the Social Security Administration (SSA) began operating a voluntary pilot program, called the Employment Eligibility Verification (EEV) program, to provide participating employers with a means for electronically verifying employees' work eligibility. Congress is considering various immigration reform proposals, some of which would require all employers to electronically verify the work authorization status of their employees at the time of hire. In this testimony GAO provides observations on the EEV system's capacity, data reliability, ability to detect fraudulent documents and identity theft, and vulnerability to employer fraud as well as challenges to making the program mandatory for all employers. This testimony is based on our previous work regarding the employment eligibility verification process and updated information obtained from DHS and SSA. A mandatory EEV program would substantially increase the number of employers using the system. As of May 2007, about 17,000 employers have registered to use the current voluntary EEV program, about half of which are active users. If participation in EEV were made mandatory, the approximately 5.9 million employers in the United States may be required to participate. Requiring all employers to use EEV would substantially increase the demands on DHS and SSA resources. DHS estimated that increasing the capacity of EEV could cost it $70 million annually for program management and $300 million to $400 million annually for compliance activities and staff. SSA officials estimated that expansion of the EEV program through this fiscal year would cost $5 million to $6 million and noted that the cost of mandatory EEV would be much higher and driven by increased workload of its field office staff that would be responsible for resolving queries that SSA cannot immediately confirm. DHS and SSA are exploring options to reduce delays in the EEV process. The majority of EEV queries entered by employers--about 92 percent--confirm within seconds that the employee is work authorized. About 7 percent of the queries cannot be immediately confirmed by SSA, and about 1 percent cannot be immediately confirmed by DHS. Resolving these nonconfirmations can take several days, or in a few cases even weeks. DHS and SSA are considering options for improving the system's ability to perform additional automated checks to immediately confirm work authorization, which may be important should EEV be mandatory. EEV may help reduce document fraud, but it cannot yet fully address identity fraud issues, for example, when employees present borrowed or stolen genuine documents. The current EEV program is piloting a photograph screening tool, whereby an employer can more easily identify fraudulent documentation. DHS expects to expand the use of this tool to all participating employers by September 2007. Although mandatory EEV and the associated use of the photograph screening tool offer some remedy, limiting the number of acceptable work authorization documents and making them more secure would help to more fully address identity fraud. 
The EEV program is vulnerable to employer fraud, such as entering the same identity information to authorize multiple workers. EEV is also vulnerable to employer misuse that adversely affects employees, such as employers limiting work assignments or pay while employees are undergoing the verification process. DHS is establishing a new Compliance and Monitoring program to help reduce employer fraud and misuse by, for example, identifying patterns in employer compliance with program requirements. Information suggesting employers' fraud or misuse of the system could be useful to other DHS components in targeting limited worksite enforcement resources and promoting employer compliance with employment laws.
The Communications Act of 1934 first established the nation’s telecommunications policy, including making communications services available “so far as possible, to all the people of the United States.” Since the cost of providing telephone service in rural areas is generally higher than the cost of providing service in central cities of metropolitan areas, universal service policy has traditionally targeted financial support to rural and other high-cost areas. In the Telecommunications Act of 1996 (the 1996 Act), Congress specified that consumers in “rural, insular, and high-cost areas” should have access to telecommunication rates and services that are “reasonably comparable” to those available to consumers in urban areas. The 1996 Act established a Federal-State Joint Board on Universal Service (Joint Board), which is composed of three FCC commissioners, four state regulatory commissioners, and a consumer advocate. The Joint Board makes recommendations to FCC on implementing the universal service related provisions of the 1996 Act. The 1996 Act also altered the federal mechanism for funding universal service by requiring telecommunications carriers and other entities providing interstate telecommunications service to contribute to USF, unless exempted by FCC. The carriers generally pass these costs on to customers, sometimes in the form of a line item on customers’ telephone bills. According to FCC, the average cost to each household in America is about $2.73 per month. The contributions are deposited into the USF and distributed to the telecommunications carriers that provide service. USF provides financial support (i.e., subsidies) through four different programs, each targeting a particular group of telecommunications users (see table 1). In 2011, support for the four programs totaled $8 billion, and the high-cost program accounted for the largest amount of support—$4 billion, or 50 percent of USF support. The high-cost program directly and indirectly supports basic telephone (i.e., fixed wireline), broadband, and wireless telephone (i.e., mobile) services. To make these services universally available, the high-cost program offers support to both wireline and wireless carriers operating in high-cost areas—generally rural—to offset costs, thereby allowing these carriers to provide rates and services that are comparable to the rates and services that consumers in low-cost areas—generally urban—receive. Consequently, while urban consumers pay the full cost of their service, many rural consumers receive services that are subsidized by the high-cost fund. The USF support a carrier can receive depends on various factors, including its status as either the incumbent or a competitor and the number of lines it claims in its service territory. Incumbent carriers are telephone carriers for a given service area that were in existence when Congress passed the 1996 Act and were members of NECA. These incumbent carriers are further classified as either “rural”—generally small carriers serving primarily rural areas—or “nonrural”—generally large carriers serving both rural and urban areas. Many small rural carriers are subject to rate-of-return regulation, while nonrural carriers are usually larger, are subject to price-cap regulation, and, according to FCC officials, provide service to approximately 95 percent of U.S. households. Federal and state governments play a role in implementing the federal high-cost program, as do not-for-profit corporations and associations.
FCC has overall responsibility for the federal high-cost program, including making and interpreting policy, overseeing program operations, and ensuring compliance with its rules. However, FCC delegated to USAC responsibility to administer the day-to-day operations of the high-cost program. State regulatory commissions hold the primary responsibility to determine carrier eligibility for program participation (i.e., states designate eligibility status of carriers) and to annually certify that carriers will appropriately use high-cost program support. Table 2 summarizes the general roles and responsibilities of the agencies and organizations involved in high-cost program administration. The 2010 National Broadband Plan provided a road map for FCC to reform the high-cost program, among other USF programs, to ensure that all Americans have access to broadband-capable networks. The National Broadband Plan concluded that millions of Americans do not have access to broadband infrastructure at the target of 4 megabits per second (Mbps) download and 1 Mbps upload. The plan recommended, among other things, creating a Connect America Fund to address broadband availability gaps in unserved areas. The plan also recommended creating a Mobility Fund to provide support for deployment of a wireless network. As we previously reported, implementing the plan’s recommendations and ensuring universal broadband availability will be challenging, and it remains to be seen whether and how effectively FCC will be able to address these challenges and implement the plan’s recommendations. FCC adopted new rules to fundamentally change the high-cost program by extending the program to support broadband capable networks. According to FCC, the new rules will not adversely affect traditional voice services; rather the changes will ensure that affordable voice and broadband services are available to all Americans by 2017. The new rules also addressed multiple recommendations from the National Broadband Plan. See appendix III for the status of FCC’s efforts related to those recommendations and a timeline for implementing the new rules. In adopting the USF Transformation Order, FCC said it would control the size of the fund as it transitions to support broadband and require accountability from carriers receiving support to ensure that public investments are used wisely to deliver intended results. The order outlines the following rules intended to improve the high-cost program and enable it to support broadband capable networks: Establishing a program budget for the first time. FCC set a budget of $4.5 billion annually over the next 6 years by taking a number of actions, including placing a cap on total per-line support, freezing certain support for service providers at current levels, eliminating or phasing down certain types of support, and setting caps for rate-of-return carriers’ capital and operating expenses. FCC also established an automatic review trigger if the program budget is threatened to be exceeded. Specifically, the USF Transformation Order states that if program demand exceeds the annualized $4.5 billion budget over any consecutive 4 quarters once fund reserves are exhausted, FCC will initiate a process to bring demand back under budget. According to FCC, the $4.5 billion, which was set at the 2011 estimated level of support, will provide a predictable funding level for carriers and protect consumers and businesses that ultimately pay for the fund as FCC expands the program to support broadband. 
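The automatic review trigger described above is essentially a rolling four-quarter test against the $4.5 billion budget. The sketch below illustrates that test under our own assumptions (hypothetical quarterly demand figures and a simple flag for exhausted reserves); it is not FCC's or USAC's actual accounting logic.

```python
# A minimal sketch, not FCC's actual methodology: the USF Transformation Order
# calls for a corrective process if high-cost demand over any consecutive four
# quarters exceeds the annualized $4.5 billion budget once fund reserves are
# exhausted. The quarterly demand figures below are hypothetical.

ANNUAL_BUDGET = 4.5e9  # $4.5 billion annual budget set in the order


def review_triggered(quarterly_demand, reserves_exhausted):
    """True if any four consecutive quarters of demand exceed the annual budget."""
    if not reserves_exhausted:
        return False
    return any(sum(quarterly_demand[i:i + 4]) > ANNUAL_BUDGET
               for i in range(len(quarterly_demand) - 3))


demand = [1.05e9, 1.10e9, 1.12e9, 1.15e9, 1.20e9]  # hypothetical, in dollars
print(review_triggered(demand, reserves_exhausted=True))
# True: the most recent four quarters total about $4.57 billion
```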
In the past, the high-cost program was not constrained by a specified level of funding and we and other stakeholders have previously raised concerns about the growing size of the program. The National Broadband Plan recommended that FCC try to keep the overall size of the fund close to its current size (in 2010 dollars) and FCC stated that the budget will help to ensure that consumers will not pay more in contributions given the new program rules. Creating the Connect America Fund. FCC created the Connect America Fund, which will ultimately replace the high-cost fund, to make both wireline and wireless broadband available in unserved areas. Within the Connect America Fund, FCC established support for mobile voice and broadband services, recognizing that promoting universal availability of mobile services is a vital component of universal service. Specifically, FCC established the Mobility Fund, which is the first universal service mechanism dedicated to ensuring availability of mobile voice and broadband services in areas where service is currently not available. In 2012, FCC dedicated $300 million (one-time) for extending wireless coverage in unserved areas and $500 million annually for ongoing support for mobile voice and broadband service. Establishing public interest obligations for all eligible carriers. Previously, carriers were required to meet state public interest obligations and limited federal duties as eligible telecommunications carriers to receive USF support payments; however, carriers were not required to meet any specific performance standards in exchange for receiving the funds. Under the USF Transformation Order, FCC requires all carriers to offer broadband services in their supported service areas, meet certain broadband performance requirements, and report regularly on associated broadband performance measures. For instance, one of the broadband performance requirements is for carriers providing service to fixed locations to offer actual download speeds of at least 4 Mbps and upload speeds of at least 1 Mbps to broadband subscribers. In the USF Transformation Order, FCC changed its method for distributing funds to carriers to address some of the recognized program inefficiencies. According to FCC, these changes will allow it to reduce high-cost support for carriers providing only voice services and make funds available to carriers for the deployment of both voice and broadband-capable networks. Since many of these changes have yet to be implemented, it is too early to assess their effectiveness. In the order, FCC took the following actions: Eliminated the identical support rule. To encourage competition among carriers in rural areas, in 1997 FCC enacted the identical support rule. At that time, FCC concluded that it would be inconsistent with the statute and the competitive goals of the 1996 Act to exclude any providers (regardless of the technology used for providing voice service) from receiving universal service support and therefore determined that universal service support should be available to all carriers that met the eligibility requirements, including competitive carriers that offered service via satellite or other wireless technology. Under this model, incumbent carriers received support based on their costs of providing service in an area or from FCC’s cost model and competitive carriers received the same amount of support per line served as the incumbent, regardless of whether the competitor needed that same amount of support to provide service. 
FCC assumed that high-cost support would be given to the most efficient and competitive carriers providing fixed, wireline telephone service (not mobile wireless providers), as they attracted customers from the incumbent carriers in a competitive marketplace. FCC anticipated that as the number of subscribers taking service from a more efficient competitor increased, the number of subscribers taking service from the incumbent would decrease, thereby decreasing the amount of support FCC paid to the incumbent providers. However, the vast majority of support payments for competitors went to wireless carriers and rather than providing a complete substitute for traditional wireline service, wireless competitors largely provided mobile voice service to customers who also had wireline service. Thus, FCC ended up paying support for both incumbents and competitors serving the same area, which caused disbursements from the fund to increase dramatically. In the USF Transformation Order, FCC acknowledged that the existing system of providing high-cost support to competitive carriers that were serving the same customers as the incumbent providers was inefficient and the identical support rule failed to efficiently target support payments to where they were most needed. By eliminating the identical support rule, FCC can stop paying competitive carriers providing voice services and make those funds available for fixed and mobile voice and broadband services in targeted areas, including areas unserved by broadband. Several of the stakeholders and economists we contacted supported FCC’s decision to eliminate the identical support rule, noting that it was inefficient and ineffective. Starting January 1, 2012, FCC froze support for each competitive carrier at the 2011 monthly baseline amount. Beginning July 1, 2012, FCC stated it would reduce support for each competitive carrier by 20 percent annually for the next 5 years, with the aim of fully eliminating support by July 1, 2016. Eliminated support in areas with 100 percent overlap. FCC also eliminated high-cost support for incumbent carriers in areas where an unsubsidized competitor—or a combination of unsubsidized competitors—also provides voice and broadband in the same service area, known as 100 percent overlap. During the course of its proceedings, FCC found that in many areas of the country, the high-cost program provided more support than necessary to achieve its goals by “subsidizing a competitor to a voice and broadband provider that was offering service without government assistance.” Significant improvements in technology have made it possible for some cable operators to offer many services, including both voice and broadband. As such, cable operators have become unsubsidized competitors, offering both voice and broadband services in the same service areas as incumbent carriers. A report commissioned by the National Cable and Telecommunications Association found that $504 million of high-cost support went to 277 rural incumbent carriers’ service area in which unsubsidized cable voice service was available to more than half of all households. The report also found that in many areas, cable operators offer voice service to more than 75 percent of the households, and in some cases they offer service to 90-100 percent of households in an incumbent carriers’ study area. 
Documents FCC made available to a congressional committee also showed evidence that other carriers, both wireless and wireline, provide service in high-cost areas but do not receive high-cost support. For example, in an area in which the incumbent carrier received $1.7 million (almost $13,000 per line) annually in 2009, four wireless carriers provided voice service to more than 90 percent of that carrier’s service area without receiving USF support. FCC acknowledged that providing high-cost support in areas of the country where another voice and broadband provider offers high-quality service without government assistance is an inefficient use of high-cost support, and therefore plans to eliminate support in areas with 100 percent overlap service. An economist we contacted raised concerns about how FCC will identify and eliminate support for incumbent carriers in areas where unsubsidized competitors provide coverage. Details on the methodology and data to be used for determining overlap areas are currently unknown, but FCC plans to phase out, over a 3-year period, all support for incumbent carriers in those areas where unsubsidized competitors offer voice and broadband services for 100 percent of the residential and business locations in the incumbent’s service area.

Established a new method to distribute funds to price-cap carriers. Prior to the USF Transformation Order, FCC distributed high-cost support to price-cap carriers through multiple mechanisms: for example, in some areas FCC used a cost model to determine the costs of providing service in a specific area, while in other areas, support was based on actual cost of service. FCC recognized that this method of distributing high-cost funds needed to be changed to accelerate broadband deployment in unserved areas. Therefore, FCC changed the rules to (1) freeze the amount of high-cost support distributed to the price-cap carriers at the 2011 support level, and (2) starting when there are model-set support amounts and auction rules in place (which FCC anticipated would be in January 2013) and for the next 5 years, employ a new model and competitive bidding to support networks that can provide both voice and broadband services. Specifically, FCC plans to develop a model that can be used for each census block in high-cost areas to determine the amount of support required to extend and sustain a broadband-capable network. Each incumbent price-cap carrier will have the opportunity to accept the annual support derived from the model in each state in which it operates. In exchange for accepting the support, a carrier must continue providing voice service, commit to deploying broadband service, and meet public interest obligations associated with all the eligible census blocks in its territory. If an incumbent price-cap carrier declines, then FCC will put the service area up for competitive bid. The winning bidder will be required to provide voice and broadband services, and will receive the amount of support the carrier bid to provide service. Stakeholders we contacted had mixed views on FCC’s plans for using both a model and competitive bidding. Several economists we interviewed commented that while FCC’s planned model may be an improvement over the previous distribution mechanism, it may not be the most effective way to distribute support because using a model is data-intensive and requires accurate and reliable data from carriers.
On the other hand, telecommunications stakeholders commented that if the variables used in the model are relatively accurate, the model may ensure that support is properly targeted to the areas most in need.

Changed the method for determining support levels for rate-of-return carriers. Prior to the USF Transformation Order, rate-of-return carriers received funding from the high-cost fund based on their actual costs. Under the old rules, some carriers were reimbursed for up to 100 percent of their eligible expenditures, faced no FCC-imposed limits, and had no incentive to be more efficient. Under the new rules, FCC is taking multiple actions to target support for investments in broadband, increase accountability, and increase incentives for efficient use of public resources. The reform measures include limiting reimbursements for capital and operating expenses and establishing an overall cap on support of $250 per line per month, or $3,000 per line annually. The cap will be phased in over a 3-year period. Some economists we spoke with commented that the cap does not go far enough to make the mechanism more efficient. Two economists told us that if the reform were to have any impact, the cap needed to be further reduced to $100 per line per month. FCC has also adopted a rule to limit support to carriers whose end-user rates (i.e., basic telephone rates that carriers charge their customers) do not meet a local rate floor. During the course of its proceedings, FCC found that some carriers receiving high-cost support were offering basic voice plans as low as $5 per month, in comparison to the 2008 national average local rate of $15.62. The law requires that urban and rural rates be reasonably comparable, which FCC has implemented by requiring that rural consumers pay no more than two standard deviations above the average of what urban consumers pay for the same level of voice service. To address this inefficiency, FCC has adopted a rule to reduce high-cost support for carriers whose end-user rates for voice service do not meet the local rate floor. Furthermore, to help ensure that the reform efforts do not adversely affect traditional voice service, FCC developed a waiver process for carriers that contend the reforms will affect their ability to provide reasonably comparable service at reasonably comparable rates if FCC reduces their current support levels. In petitioning FCC for a waiver, a carrier must clearly demonstrate that good cause exists for exempting it from some or all of the reforms, and that the waiver is necessary and in the public interest to ensure that consumers in the area continue to receive voice service. FCC cautioned that those seeking a waiver would be subject to a rigorous review, including an accounting of all revenues that the carrier receives. However, for those carriers receiving a waiver, FCC has not yet determined if it would impose a ceiling on the amount of support a carrier could receive per line.

We and OMB have each issued reports in the last 7 years that were critical of FCC’s management of the high-cost fund, and in the USF Transformation Order, FCC has taken several steps to address these challenges. The management challenges we identified included a lack of performance goals and measures for the program and weak internal controls, resulting in FCC’s limited ability to oversee the actions of carriers or the data they provide.
In 2005, OMB criticized FCC’s inability to measure the effect of the fund on subscribership in rural areas or to base funding decisions on any indication of measurable benefits. To address these challenges, FCC has (1) established performance goals and measures for the high- cost program, (2) improved its internal control mechanisms over the fund, and (3) directed USAC to undertake additional oversight and management actions. In 2008, we reported that FCC lacked specific performance goals or measures for the high-cost program. OMB reported that the high-cost program neither measures the impact of funds on telephone subscribership in rural areas nor bases funding decisions on measureable benefits. As a result, after spending more than $41.1 billion in high-cost funds since 2001, we reported that it was still unclear what FCC had achieved through the program. In our report in 2008, we recommended that FCC establish short- and long-term performance goals and measures to make clear the program’s intentions and accomplishments. As shown in table 3, FCC developed five performance goals and three performance measures for the high-cost program in the USF Transformation Order. As of July 2012, FCC was still formulating measures for the remaining two goals. In 2008, we also reported weaknesses in FCC’s internal control mechanisms, including the carrier certification process, carrier audits, and carrier data validation. State officials’ annual certification of carriers is the primary tool used to determine if carriers are operating according to the high-cost fund’s guidelines. However, because the certification requirements were not standardized across states, carriers have been subject to varying levels of oversight. Audits of carriers are the primary tool used to oversee carrier activities, and audits may be conducted by USAC, state regulatory commissions, or FCC’s Office of Inspector General. In 2008, we reported that from 2002 to 2008, USAC had conducted about 17 audits, from more than 1,400 carriers participating annually in the high-cost program. We also found in a survey that 7 out of 50 state regulatory commissions reported auditing incumbent carriers. Based on these findings, among others, we determined that FCC’s internal controls were weak and that its ability to adequately oversee the high-cost program was hindered. In addition, neither FCC nor USAC had audited the carrier-reported data for accuracy, and they did not follow up to assess whether the actions carriers claimed they were taking with regard to using high-cost support were consistent with the actions they actually were taking. We recommended that FCC identify areas of risk in its internal control environment and implement mechanisms to help ensure carriers’ compliance with program rules. In the USF Transformation Order, FCC addressed all three of the areas we discussed in our 2008 report. To standardize the certification requirements and bring more scrutiny to the data reported by carriers, FCC established a national oversight framework that will be implemented as a partnership between FCC and the states, U.S. territories, and tribal governments. This framework will include annual reporting and certification requirements for all carriers receiving universal service funds and is designed to provide federal and state regulators with the information needed to determine whether recipients are using support for the intended purposes. 
Under the new standards, all carriers must include in their annual reports to FCC and their respective state commissions a progress report on their 5-year build-out plans, data, and explanatory text concerning outages, unfulfilled requests for service, and complaints received. They must also certify compliance with applicable service quality and consumer protection standards and further certify their ability to function in emergency situations. To address the lack of audits on the part of FCC and USAC, FCC directed USAC to review and enhance two programs that are intended to oversee and safeguard USF. FCC developed these programs in coordination with OMB in 2010 to ensure that recipients of USF support comply with FCC rules, and to prevent and detect waste, fraud, and abuse. FCC expects that these audits will verify the accuracy of the underlying data and address our previously reported concern that FCC does not validate the accuracy of data reported by carriers. Additionally, FCC directed USAC to annually assess compliance with the new requirements established for Connect America Fund recipients and test the accuracy of carriers’ certifications. While these actions should improve FCC’s oversight of program funds, it is too soon to assess their effectiveness.

While FCC has taken steps to address several shortcomings of the high-cost program, our review of the order has identified gaps in FCC’s plans to better oversee the program and make it more effective and efficient. Specifically, we determined that FCC lacks (1) a data-analysis plan for carrier data it will collect, and (2) a mechanism to link carrier rates and revenues with USF support payments.

In the past, FCC had no way to measure the effectiveness of the high-cost program because it did not collect adequate data at the service area level (i.e., a geographic area served by a specific carrier) that would allow FCC to measure the effect of the funds by carrier on subscribership levels. As a result, FCC did not know if high-cost funds were achieving their intended purpose. Economists have pointed out that to determine if high-cost funds were achieving their intended purpose, FCC would need to determine whether the provision of funds had caused an increase in the level of subscribership that would not have occurred in the absence of the funds. To assess program effectiveness, FCC would need to collect data showing the outcomes (i.e., the change in the level of telephone subscriptions) in study areas that used these funds as well as the outcomes in study areas where these funds were not used. Under the USF Transformation Order, FCC will start collecting data from carriers that receive Connect America Fund monies on (1) the amount of funding the carriers received, (2) their build-out of infrastructure for broadband-capable networks, and (3) the service quality and speed of the broadband service provided. According to the order, FCC is collecting the information to monitor progress in achieving its broadband goals and to assist FCC in determining whether the funds are being used appropriately. However, FCC’s order does not articulate a specific data-analysis plan for the carrier data it will collect, and it is unclear whether or how FCC plans to use the data. We have previously noted that sound program evaluation should include a detailed data-analysis plan to track the program’s performance and evaluate its final results.
Lacking such an evaluation, the achievements and overall effectiveness of the Connect America Fund are less likely to be clear, and FCC might not have the analysis to determine what changes should be made to improve the program. Analyzing the carrier data could enable FCC to determine the program’s effectiveness because the analysis would provide some definitive examples of the connection between the level of subsidy and the specific demographic factors of the service areas that have shown an increase in broadband access. Furthermore, such analysis would enable FCC to adjust the size of the Connect America Fund based on sound evaluation and would allow Congress and FCC to make better informed decisions about the future of the program and how program efficiency could be improved. Although FCC plans to determine the number of residential, business, and community anchor institution locations that have newly gained access to broadband service per $1 million spent in USF subsidy, such an evaluation does not provide any direct link between an increase in broadband access and funding subsidies provided by the Connect America Fund. In other words, FCC will know the extent to which broadband access has changed over time, but it will not know what factors have influenced the change.

One of FCC’s performance goals (and a requirement in statute) is to ensure that rates for broadband and voice services are reasonably comparable in all regions of the country. FCC has defined voice rates as being reasonably comparable if the rural rate does not exceed the average urban rate by more than two standard deviations. However, in the USF Transformation Order, FCC reported that many rural carriers are offering basic local rates for telephone service that are lower than the average basic local rate paid by urban consumers. In fact, FCC cited data submitted by NECA which summarized 2008 residential rates for over 600 companies — a broad cross-section of carriers that typically receive universal service support — showing that approximately 60 percent of those carriers offered pricing plans that were below the 2008 national average local rate of $15.62. (According to FCC information published in 2008, if the average urban rate plus federal and state charges were $25.62, rural rates plus federal and state charges could be as high as $36.52.) Two of the economists we contacted have written on the inequity of this urban-rural rate difference, stressing that an effect of this inequity could be the transfer of wealth from poor urban consumers who pay into the fund but receive no subsidy, to wealthy rural consumers who benefit from subsidized rates. In the order, FCC itself stated that it is not equitable for “consumers across the country to subsidize the cost of service for some consumers that pay local service rates that are significantly lower than the national urban average” (USF Transformation Order, paras. 235 and 240). FCC officials told us they plan to determine how much carrier revenue would increase if the rural rates increased to the urban rate average. However, because FCC does not include carrier revenues in determining USF support payments for the carriers, FCC will allow carriers that subsequently raise their rates to the national urban average to receive the support payments they were initially denied when their rates were below the specified floor. As a result, FCC’s incentive mechanism to raise rural rates will not result in any reduction in the amount consumers are charged for universal service.
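The comparability benchmark discussed above reduces to simple arithmetic: a rural voice rate is treated as reasonably comparable if it does not exceed the average urban rate by more than two standard deviations. The sketch below works through the 2008 figures cited in this report; the function name and the derived $10.90 band (implied by the published $25.62 average and $36.52 ceiling) are our own framing, not FCC's published formula.

```python
# A minimal sketch of the rate-comparability benchmark described above: a rural
# voice rate is "reasonably comparable" if it does not exceed the average urban
# rate by more than two standard deviations. The 2008 figures are those cited in
# this report; the $10.90 band is implied by the $25.62 average and the $36.52
# ceiling FCC published. The function name is ours, not FCC's.

URBAN_AVERAGE_2008 = 25.62   # average urban rate plus federal and state charges
TWO_STD_DEV_2008 = 10.90     # implied by the $36.52 ceiling FCC published in 2008


def reasonably_comparable(rural_rate, urban_average=URBAN_AVERAGE_2008,
                          two_std_dev=TWO_STD_DEV_2008):
    """True if the rural rate is at or below the urban average plus two standard deviations."""
    return rural_rate <= urban_average + two_std_dev


print(round(URBAN_AVERAGE_2008 + TWO_STD_DEV_2008, 2))  # 36.52, the comparability ceiling
print(reasonably_comparable(30.00))  # True: within the two-standard-deviation band
print(reasonably_comparable(40.00))  # False: more than two standard deviations above the average
```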
Members of the National Association of State Utility Consumer Advocates (NASUCA) we contacted expressed concern that the level of USF support payments is not tied to a carrier’s rates and revenues. They explained that carriers’ revenues come from services other than basic local service, but all of those services are carried over the networks to which consumers have contributed for years through the USF. These revenues are not included in the determination of USF payments that the carriers will receive. In addition, of the six economists we interviewed who are knowledgeable about how universal service support payments are determined, four explicitly mentioned revenues as one of the factors that should be taken into account for modeling the level of support that carriers receive. In 2007, the Joint Board adopted as a basic principle that USF should exist within a limited budget and made several recommendations to help FCC do so, including considering a carrier’s revenues when calculating its need for USF support. Controlling the growth of the high-cost fund could help FCC achieve its goal of minimizing the universal service contribution burden on consumers and businesses. Similar to the points raised by NASUCA and four of the economists we contacted, the Joint Board believed in 2007 that if broadband was to become a funded universal service, then the mechanisms used to calculate support payments should be revised to take into account the carriers’ net profits from selling broadband to wireline customers. The Joint Board noted that such profits should be measured and used to offset some of the carriers’ claims for explicit USF support. However, in 2008, FCC declined to implement the Joint Board’s recommendation related to considering carrier revenues when calculating support payments. According to the Joint Board, FCC did not address why the Joint Board’s recommendation had not been adopted. Under the USF Transformation Order, FCC will consider a carrier’s revenue when determining support payments under certain circumstances. In particular, for those carriers that petition for a waiver to exempt the carrier from some or all USF reforms, FCC intends to subject such requests to a rigorous, thorough, and searching review comparable to a total company earnings review. In those cases, FCC intends to take into account not only all revenues derived from network facilities that are supported by universal service, but also revenues derived from unregulated and unsupported services as well. As we noted previously, under the USF Transformation Order, FCC is developing a new model to revise its method for calculating carrier support, since FCC recognized that the prior method of distributing high-cost funds needed to be changed to accelerate broadband deployment in unserved areas. However, FCC has not stated what factors, such as carrier revenues, will be included in the model. FCC has undertaken the difficult task of reforming the high-cost program to make it more efficient and thus able to support both voice and broadband services. In the USF Transformation Order, FCC said it would control the size of USF as it transitions to support broadband and adopted new rules to make the fund more efficient as a way to minimize the universal service contribution burden on consumers and businesses. 
As FCC looks to broaden the scope of the high-cost program by providing support for broadband capable networks, it is therefore important for FCC to ensure that the limited program funds are used as effectively and efficiently as possible to stem further growth in the fund. Historically, FCC has not collected data at the level economists agree is necessary to determine the overall effectiveness of the high-cost program or demonstrate that the program increased telephone subscribership beyond the level that would have been achieved if there were no subsidy. Rather, FCC has assumed that the subsidies going to carriers were positively affecting subscribership even though it collected no empirical data to support that conclusion. In the USF Transformation Order, FCC instituted performance goals and measures with the intention of ensuring that the reforms achieve their intended purpose, and will require those carriers receiving support from the Connect America Fund to submit additional information. However, FCC has no specific data-analysis plan for the carrier data it will collect. Such analysis could enable FCC to correlate the amount of money spent with the increase in broadband access in specific areas and thus help FCC to determine the effectiveness of Connect America Fund expenditures. Lacking such analysis, the program’s achievements and overall effectiveness are less likely to be clear and Congress and FCC might not have the information necessary to make informed decisions about the program’s future. According to statute, urban and rural telecommunication rates should be reasonably comparable, but many rural consumers, whose rates are supported through the high-cost fund, pay rates that are lower than many urban consumers. FCC has stated that it is not equitable for all consumers to subsidize the cost of service for some consumers who pay local service rates that are significantly lower than the national average. In addition, given the way the high-cost program is funded, it is possible that poor urban consumers are subsidizing wealthy rural consumers. To provide an incentive for carriers to raise rates in rural areas, FCC plans to penalize carriers with rates that are too low by reducing the amount of high-cost support they can receive. While this action should help rural and urban rates become more comparable, it will not prevent consumers from subsidizing the cost of service for those areas where rates are too low because FCC will continue to allow carriers to receive the same amount of subsidy once their rates are raised to the urban mean. Therefore, although FCC would like to prevent consumers from subsidizing carriers whose rates for basic local service are artificially low, its incentive mechanism to raise rural rates will not reduce the financial burden placed on all consumers as there is currently no connection between the amount of support payments a carrier receives and the revenue a carrier earns, through rates or any other source. In addition to voicing concern for the potential inequity of rural rates that are lower than urban rates, FCC has a stated goal to minimize the universal service contribution burden on consumers and businesses. The National Broadband Plan recommended that FCC keep the overall size of the fund close to its 2010 funding level, and the Joint Board has stated its strong commitment to limit the size of the fund. 
As a way to control the size of the fund, the Joint Board recommended that FCC consider a carrier’s revenues when calculating its need for USF support but FCC declined to implement this recommendation. Under the USF Transformation Order, FCC has the opportunity to revisit this issue as it develops a new model to determine the amount of support a carrier should receive, however it has not stated what factors will be included in the model. FCC should take the following two actions: To determine the overall effectiveness of the Connect America Fund as well as improve the oversight and transparency of the high-cost program, establish a specific data-analysis plan for the carrier data and make the information publicly available. To help minimize the universal service contribution burden on consumers and businesses, as FCC examines and revises the manner in which carrier support payments are calculated, consult with the Joint Board and/or make appropriate referrals to determine what factors, such as carrier revenues, should be considered in the calculation. We provided a draft of this report to FCC for its review and comment. In response, FCC stated that our recommendations were valuable and noted that it has taken steps to address the oversight and management challenges we previously identified. Specifically, FCC noted that in the USF Transformation Order, FCC has adopted performance goals, set forth requirements to provide voice and broadband service to all Americans, and established a national framework to ensure that recipients who benefit from public investment in their networks have clearly defined public interest obligations and reporting requirements. FCC’s written response also included information to further clarify the actions that are currently under way related to the USF Transformation Order. With respect to our first recommendation, FCC agreed that it should establish a specific plan to analyze the data reported by the carriers as a way to improve oversight of the program, and noted it is planning to build on measures adopted in the USF Transformation Order to improve the effectiveness of the new program. Related to our second recommendation, FCC agreed that revenues derived from infrastructure supported by universal service are an important consideration when determining support provided to carriers, and FCC appreciated our suggestion that it work with the Joint Board to implement the reforms in the USF Transformation Order. FCC’s written comments are reprinted in appendix II. FCC provided technical comments on the draft report that we incorporated as appropriate. We are sending copies of this report to the Chairman of FCC and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed on appendix IV. This report examines the Federal Communications Commission’s (FCC) plans to refocus and expand the high-cost program of the Universal Service Fund (USF) to provide support for broadband-capable networks. 
In particular, the report provides information on (1) FCC’s plans for repurposing the USF high-cost program for broadband services and (2) how FCC is planning to address previously identified oversight and management challenges as it broadens the scope of the program. To understand FCC’s plans for repurposing the high-cost program for broadband service, we reviewed and analyzed FCC’s USF Transformation Order and associated stakeholder comments. We interviewed officials from FCC, the Universal Service Administrative Company (USAC), and the National Exchange Carrier Association (NECA) on the rule changes outlined in the order and other actions that FCC has taken to repurpose USF to support broadband services in addition to voice services. We analyzed and assessed the previous and planned high-cost program structure and method of distributing funds. We also reviewed and analyzed telecommunications stakeholders’ filings and studies on the potential impact of FCC’s planned changes to the existing high-cost program. We limited the scope of our review to the USF high- cost program because in the USF Transformation Order, FCC focused on repurposing the high-cost program to support broadband. Although FCC made changes to intercarrier compensation in the USF Transformation Order, we did not review FCC’s reform efforts related to intercarrier compensation. Intercarrier compensation refers to the charges that one carrier pays to another carrier to originate, transport, and/or terminate telecommunications traffic. The intercarrier compensation regimes are governed by a complex and different system of federal and state rules than those of universal services; therefore, we decided not to review intercarrier compensation. To determine how FCC is planning to address previously identified oversight and management challenges as it broadens the scope of the program, we reviewed our past reports, documents from the Office of Management and Budget and FCC’s Office of Inspector General, and academic literature related to the high-cost program of USF. We met with telecommunications stakeholders, including associations representing consumers, small and large telecommunications carriers, and state regulatory commissions, to obtain their views on FCC’s management of and the changes made to the high-cost program. We identified industry stakeholders based on prior published literature, including filings with FCC, and other stakeholders’ recommendations. We also conducted semi-structured interviews with economists from academia and the telecommunications industry, recognized for their thorough knowledge of universal service. The economists we spoke with were selected based on studies focused on the high-cost program of USF, published within the last 5 years, and recommendations from telecommunications industry stakeholders, including associations representing telecommunications carriers, consumers, and state regulatory commissions. See table 4 for the stakeholders and economists we contacted. We conducted this performance audit from September 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In early 2009, Congress directed the Federal Communications Commission (FCC) to develop a broadband plan to ensure every American has “access to broadband capability” and to report annually on the state of broadband availability. In March 2010, an FCC task force issued the National Broadband Plan, which provided a road map for FCC to reform the Universal Service Fund (USF) and the high-cost program, in particular. The National Broadband Plan made 11 recommendations related to universal service. FCC has implemented or partially implemented 3 of these recommendations and is planning to implement the remaining 8. Table 5 provides information on actions FCC has taken to enact the selected recommendations made in the National Broadband Plan. In the USF Transformation Order, released in November 2011, FCC took action to realize the overarching goal of the National Broadband Plan to make affordable broadband service available to all Americans. In particular, FCC adopted a number of actions designed to transition universal service funds from supporting only voice service to supporting networks that can provide both voice and broadband services. Table 6 displays FCC’s timeline for making this transition.

In addition to the contact named above, Sally Moino, Assistant Director; Pedro Almoguera; Colin Fallon; David Hooper; Jennifer Kim; Andrew Stavisky; and Nancy Zearfoss made key contributions to this report.

Telecommunications: FCC’s Performance Management Weaknesses Could Jeopardize Proposed Reforms of the Rural Health Care Program. GAO-11-27. Washington, D.C.: November 17, 2010.
Telecommunications: Improved Management Can Enhance FCC Decision Making for the Universal Service Fund Low-Income Program. GAO-11-11. Washington, D.C.: October 28, 2010.
Telecommunications: FCC Should Assess the Design of the E-rate Program’s Internal Control Structure. GAO-10-908. Washington, D.C.: September 29, 2010.
Telecommunications: Long-Term Strategic Vision Would Help Ensure Targeting of E-rate Funds to Highest-Priority Uses. GAO-09-253. Washington, D.C.: March 27, 2009.
Telecommunications: FCC Needs to Improve Performance Management and Strengthen Oversight of the High-Cost Program. GAO-08-633. Washington, D.C.: June 13, 2008.
The high-cost program within the Universal Service Fund (USF) provides subsidies to telecommunications carriers that serve rural and other remote areas with high costs of providing telephone service. The annual program cost has grown from $2.6 billion in 2001 to over $4 billion in 2011, primarily funded through fees added to consumers’ phone bills. The program is managed by the Federal Communications Commission (FCC), which noted that providing universal access to broadband is “the universal service challenge of our time.” Accordingly, FCC made changes to the program to make funds available to support both telephone and broadband. GAO previously reported that using USF monies for broadband could cause the size of the fund to greatly expand unless FCC improved its management and oversight to ensure the program’s cost-effectiveness. This requested report examines FCC’s (1) plans for repurposing the high-cost program for broadband, and (2) plans to address previously identified management challenges as it broadens the program’s scope. GAO reviewed and analyzed pertinent FCC orders, associated stakeholder comments, and reports related to USF and interviewed federal and industry stakeholders, as well as economists and experts. Under the USF Transformation Order, FCC adopted new rules to fundamentally change the high-cost program by extending the program to support broadband capable networks. For example, FCC established a $4.5-billion annual program budget for the next 6 years, created new funds—called the Connect America Fund and the Mobility Fund—that will support broadband deployment, and established public interest obligations for the carriers as a condition of receiving funds. Specifically, FCC will require carriers to offer broadband services in their supported service areas, meet certain broadband performance requirements, and report regularly on associated broadband performance measures. FCC also changed its method for distributing funds to carriers to address some of the recognized inefficiencies with the program. According to FCC, these changes will allow it to reduce high-cost support for carriers providing only voice services and make funds available to carriers to offer both voice and broadband services. FCC has taken several steps to address previously identified oversight and management challenges that GAO and the Office of Management and Budget (OMB) have raised in the last 7 years, but issues remain. Management challenges identified by GAO included a lack of performance goals and measures for the program and weak internal controls, while OMB criticized FCC’s inability to base funding decisions on measurable benefits. In response, FCC established performance goals and measures for the high-cost program and improved internal control mechanisms over the fund. While these are noteworthy actions, GAO identified gaps in FCC’s plans to better oversee the program and make it more effective and efficient. In particular, FCC has not addressed its inability to determine the effect of the fund and lacks a specific data-analysis plan for carrier data it will collect. Such analysis would enable FCC to adjust the size of the Connect America Fund based on data-driven evaluation and would allow Congress and FCC to make better informed decisions about the program’s future and how program efficiency could be improved. GAO also found that FCC lacks a mechanism to link carrier rates and revenues with support payments. 
A requirement in statute is for rates for telecommunications services to be reasonably comparable in rural and urban areas, but FCC has noted that some rural carriers are offering basic local rates for telephone services that are lower than the average basic rate paid by urban consumers. FCC has stated that it is not equitable for all consumers to subsidize the cost of service for some consumers who pay local service rates that are significantly lower than the national average and has therefore instituted an incentive mechanism for carriers to increase artificially low consumer rates. Although FCC would like to prevent consumers from subsidizing carriers that offer service at artificially low rates, its incentive mechanism to raise rural rates will not reduce the financial burden placed on all consumers as there is currently no connection between the support payments a carrier receives and the carrier’s rates and revenues. The Federal-State Joint Board on Universal Service recommended that FCC consider a carrier’s revenues when calculating its need for support payments, but in the past, FCC declined to implement this recommendation. FCC is developing a new model to calculate carrier support, but has not stated what factors will be included. FCC should (1) establish a specific data-analysis plan for carrier data to determine program effectiveness, and (2) consult with the Joint Board as it examines the factors for calculating carrier support payments. FCC concurred with the recommendations and provided technical comments.
As an agency of the Department of Transportation, FAA is charged with promoting the safe, orderly, and expeditious flow of air traffic in the national airspace. Fulfilling its mission requires the extensive use of technology. The achievement of the agency’s mission is also dependent in large part on the skills and expertise of its workforce. Its workforce of nearly 50,000 people provides aviation services that include air traffic control; maintenance of air traffic control equipment; and certification of aircraft, airline operations, and pilots. FAA is organized into several staff support offices (examples include the Office of Information Services and the Office of Human Resource Management) and five lines of business, which include Airports, Regulation and Certification, Commercial Space Transportation, the Office of Security and Hazardous Materials, and the newly formed Air Traffic Organization (ATO). The ATO was formed on February 8, 2004, to better provide safe, secure, and cost-effective air traffic services now and into the future. The Air Traffic Services and the Research and Acquisitions units, which had been primarily responsible for managing air traffic services within FAA, were combined into one performance-based organization to create ATO. ATO is led by FAA’s Chief Operating Officer and consists of 10 service units.

FAA relies extensively on information technology to carry out its NAS operations. It constantly depends on the adequacy and reliability of the nation’s ATC system, which comprises a vast network of radars; automated data processing, navigation, and communications equipment; and ATC facilities. Through this system, FAA provides services such as controlling takeoffs and landings and managing the flow of traffic between airports. For example, the Integrated Terminal Weather System is employed to allow maximum use of airport runways in all kinds of weather through a variety of weather sensors. The Wide Area Augmentation System is used to provide vertically guided landing to aircraft at thousands of airports and airstrips where there is currently no vertically guided landing capability. FAA also relies on IT to carry out its mission-support and administrative operations (non-NAS operations). For example, FAA uses IT to support accident and incident investigations, security inspections, and personnel and payroll functions. With an IT budget of about $2.5 billion for fiscal year 2004, FAA accounts for over 90 percent of the Department of Transportation’s IT budget. The amount of investments in both NAS and non-NAS IT is shown in table 1 below.

In 1995, we designated FAA’s modernization of its air traffic control system, the principal technology component of the NAS, as a high-risk area because of the size and complexity of the program and FAA’s many failures in meeting projects’ cost, schedule, and performance goals. In our latest High-Risk Series, issued in January 2003, we addressed the critical need for FAA to continue to improve its investment management practices—the management processes the agency uses to select, control, and evaluate the benefits realized from its IT spending—because the agency would be spending nearly $16 billion more through FY 2007, after having already spent $35 billion since 1981. Other reports have also noted weaknesses in FAA’s IT investment management processes and have made a number of recommendations to address this area. For instance, last year we reported that while FAA had improved its processes, several issues remained unresolved.
We noted, for example, that the agency had not yet implemented processes for evaluating projects after implementing them, in order to identify lessons learned and improve the investment management process. FAA’s process for managing an IT investment varies depending on the type of investment—NAS systems in development through the second year of operation (F&E), NAS systems in operation after the second year (OPS), and non-NAS systems each follow different processes. NAS investments are managed through a standardized process, the FAA Acquisition Management System (AMS), and non-NAS investments are managed through a number of different processes. In April 1996, FAA implemented its AMS in response to legislation that directed the agency to develop a new acquisition management system. Because of FAA’s contention that some of its modernization problems were caused by federal acquisition regulations, the Congress enacted legislation in November 1995 that exempted the agency from most federal procurement laws and regulations and directed FAA to develop and implement a new acquisition management system that would address the unique needs of the agency. AMS was intended to reduce the time and cost for fielding new products and services by introducing (1) a new investment management system that spans the entire life cycle of an acquisition, (2) a new procurement system that provides flexibility in selecting and managing contractors, and (3) organizational and human capital reforms that support the new investment and procurement systems. AMS provides high-level acquisition policy and guidance for selecting and controlling FAA’s NAS investments through all phases of the acquisition life cycle, which is organized into a series of phases and decision points that include (1) mission analysis, (2) investment analysis, (3) solution implementation, and (4) in-service management. To select investments, FAA has established two processes—mission analysis and investment analysis—which together constitute a set of policies, procedures, and guidance that enhance the agency’s ability to screen projects that are submitted for funding. Also, through these two processes FAA is to assess and rank each project based on its relative costs, benefits, risks, and contribution to FAA’s mission, and a senior, corporate-level decision- making group selects projects for funding. After a project has been selected, FAA officials are required to formally establish the life cycle cost, schedule, benefits, and performance baselines that are used to monitor the project’s status throughout the remaining phases of the acquisition management life cycle. See figure 1 for a graphic depiction of FAA’s life cycle management process. Several groups are involved in managing FAA’s NAS investments; they perform functions from analysis of mission needs and alternative investments through system development, implementation, operation, and, ultimately, disposal. The roles and responsibilities of each group are described below: Joint Resources Council (JRC)—This board makes corporate-level resource and investment decisions and establishes investment programs. Members include Associate Administrators representing FAA’s lines of business, the FAA Acquisition Executive, the Chief Financial Officer, the Chief Information Officer (CIO), and the Assistant Administrators for System Safety, for Policy, Planning and International Aviation, and for Region and Center Operations. 
The board is supported by the JRC Secretariat Team, a group that facilitates the board’s processes by maintaining the meeting calendar and guidance documents, developing records of decisions, and providing advisory and liaison support to programs. Systems Engineering/Operational Analysis Team—This team performs affordability assessments for newly proposed investments and prepares recommendations for the reprogramming of funds from lower priority programs. It also prepares annual budget submissions for approval by the JRC. This team is composed of representatives from each line of business and from other functional disciplines and is chaired by the Director, System Architecture and Investment Analysis. Investment Analysis Team (IAT)—This team is assembled for a relatively short period for each specific investment being considered, to conduct the detailed analysis of alternatives that will lead to selecting and recommending a preferred acquisition solution. The team draws experts from the integrated product teams, the organizational unit with the need, the investment analysis staff, and other organizations. Corporate Mission Analysis Organization—Performs agency-level mission analysis and coordinates service area analysis, an activity that is conducted during mission analysis to (1) identify capability shortfalls for or in conjunction with service organizations, (2) ensure alignment with agency strategic goals, and (3) eliminate redundant activity, duplicate benefits, service gaps, and service overlaps. It also develops and maintains standards and tools for conducting service area analysis, and it assists service organizations in establishing a service area analysis capability. In addition to identifying the roles and responsibilities of the groups involved in the management process, AMS provides guidance on the documents and decisions that result from each of the life cycle phases. For example, through the mission analysis phase, FAA identifies critical needs that the agency must meet for improving the safety, capacity, efficiency, and effectiveness of the NAS. Approval of a mission need statement by the JRC signifies that the agency agrees that the need is critical enough to proceed to the next phase—investment analysis. During the investment analysis phase, the IAT is to analyze and recommend a solution that best satisfies FAA’s performance goals and customer service needs. This team is then to rank each proposed project based on a number of factors, including how well it meets mission needs compared to other projects and whether it has a favorable cost-benefit ratio. As part of the JRC selection process, the life cycle cost, schedule, benefits, and performance baselines are established in a formal document called the acquisition program baseline (APB), which is designed to be used by program offices to monitor a project’s status in achieving those baselines throughout the remaining phases of the acquisition management life cycle. The solution implementation phase begins when the JRC approves and funds a project, establishes its acquisition program baseline, and authorizes the service organizations to implement and manage the project over its life cycle. After the project has been implemented and is in operation (FAA’s in-service management phase), the service organizations monitor and assess operational performance. 
Also during this phase, the project is monitored to determine whether the current capability satisfies the demand for services or whether another solution offers the potential for improving safety or effectiveness or for significantly lowering costs. If the current capability is lacking, FAA initiates a process whereby the mission need is revalidated and the investment analysis process begun again, possibly leading to a new investment decision. Figure 2 provides detail on the phases of FAA’s IT investment management process and decision points. The highlighted decision points represent those for which the JRC must make an approval decision before a project can move forward. Senior executives have stated that with the reorganization of the ATO in February 2004, discussions have been held about realigning the investment management process to make the heads of the service units responsible and accountable for managing programs’ capital investments and operating costs from inception to retirement. In the past, the business units were organized to manage either capital investments or operating costs, but not both. These discussions have not yet led to specific changes in FAA’s investment management processes and responsibilities. While the AMS was intended to apply to all FAA investment programs, it has not been implemented for non-NAS investments. Each of the agency’s business lines and staff offices that manage non-NAS investments has implemented its own processes for managing these investments. Examples of these various non-NAS investment processes include the following: Regarding an investment management board structure, the Financial Services staff office has an informal board consisting of the Chief Financial Officer, Deputy Chief Financial Officer, and heads of offices within Financial Services. The Financial Services life cycle process guide directs the board’s operations. In the Regulation and Certification unit, the senior management team makes investment management decisions with input from the Chief Information management team. This unit is developing an IT investment management process guide, which is expected to be completed by the end of the fiscal year. When selecting investments, the Human Resource Management unit uses its established annual budget formulation process, while the Region and Center Operations unit is moving toward a new process whereby, in order to be selected, investments need to demonstrate, at a minimum, that they (1) are compliant with FAA’s architecture, (2) have a business sponsor, (3) have a solid business case, and (4) can be funded. In controlling investments, Information Services has developed processes to monitor contract expenditures, and unit managers regularly perform financial management reviews of the programs under their purview, but there is no structured process for oversight of projects’ performance against expectations. In the Human Resource Management unit, division managers hold quarterly reviews to assess projects’ progress in meeting cost and schedule expectations and aligning with strategic goals. Descriptions of the processes used by each of the units responsible for managing non-NAS investments can be found in appendix II. 
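To make the sequence just described easier to follow, the sketch below models the NAS acquisition life cycle and its JRC decision points, summarized above and depicted in figures 1 and 2, as a simple gated progression. It is purely illustrative: the class, method, and gate names are our own simplifications under stated assumptions, not FAA software or official AMS terminology beyond the phase names used in this report.

```python
# Illustrative only: a toy model of the AMS life cycle phases and JRC gates.
PHASES = ["mission analysis", "investment analysis",
          "solution implementation", "in-service management"]

# JRC approvals (simplified names) that close out a phase.
GATES = {
    "mission analysis": "mission need decision",
    "investment analysis": "final investment decision",
}

class NasProgram:
    def __init__(self, name):
        self.name = name
        self.phase_index = 0
        self.approvals = []

    @property
    def phase(self):
        return PHASES[self.phase_index]

    def jrc_approve(self, decision):
        """Record the JRC approval that closes the current phase."""
        expected = GATES.get(self.phase)
        if decision != expected:
            raise ValueError(f"{self.name}: expected approval '{expected}', got '{decision}'")
        self.approvals.append(decision)

    def advance(self):
        """Enter the next phase, but only if any gate on the current phase is approved."""
        gate = GATES.get(self.phase)
        if gate and gate not in self.approvals:
            raise RuntimeError(f"{self.name}: cannot leave {self.phase} without JRC approval")
        self.phase_index = min(self.phase_index + 1, len(PHASES) - 1)

    def revalidate_mission_need(self):
        """In-service shortfall: revalidate the mission need and return to
        investment analysis, where a new JRC investment decision is required."""
        self.phase_index = PHASES.index("investment analysis")
        self.approvals = ["mission need decision"]

if __name__ == "__main__":
    program = NasProgram("Example program")
    program.jrc_approve("mission need decision")
    program.advance()                         # -> investment analysis
    program.jrc_approve("final investment decision")
    program.advance()                         # -> solution implementation
    program.advance()                         # -> in-service management
    print(program.phase)                      # in-service management
```

The only point of the sketch is that a program cannot leave a phase until the JRC approves the decision that closes it, and that an in-service shortfall routes the effort back through investment analysis rather than directly to a funding decision.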
In January 2004, the FAA Administrator established the Information Technology Executive Board (ITEB) to “strengthen FAA’s ability to use IT as an agencywide strategic asset” and “guide fundamental changes in the governance of IT assets.” Its charter calls for the ITEB to assume responsibility for making investment decisions about non-NAS IT investments. However, the ITEB has not yet implemented this aspect of its charter. Therefore, there is currently no single board or investment management process for non-NAS investments that would be analogous to the JRC board and AMS process that are used for NAS investments. The ITIM framework is a maturity model composed of five progressive stages of maturity that an agency can achieve in its investment management capabilities. It was developed on the basis of our research into the IT investment management practices of leading private- and public-sector organizations. The framework identifies critical processes for making successful IT investments, organized into the five increasingly mature stages. These maturity stages are cumulative; that is, in order to attain a higher stage of maturity, the agency must have institutionalized all of the requirements for all of the lower stages, in addition to those for the higher stage. The ITIM can be used both to assess the maturity of an agency’s investment management processes and to guide organizational improvement. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in several of our evaluations, and a number of agencies have adopted it. These agencies have used ITIM for purposes ranging from self-assessment to redesign of their IT investment management processes. ITIM’s five maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages; the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of “critical processes” that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. An organization may be performing key practices from more than one maturity stage at the same time. This is not unusual, but efforts to improve investment management capabilities should focus on becoming compliant with lower-stage practices before addressing higher-stage practices. Stage 2 of the ITIM framework encompasses building a sound investment management process by establishing basic capabilities for selecting new IT projects. It also involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations and the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. The basic selection processes established in Stage 2 lay the foundation for more mature selection capabilities in Stage 3. Stage 3 requires that an organization continually assess both proposed and ongoing projects as parts of a complete investment portfolio—an integrated and competing set of investment options. 
It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and evaluation processes, which are to be evaluated during postimplementation reviews (PIRs). This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than relying exclusively on the balance between the costs and benefits of individual investments. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and investment processes in order to better achieve strategic outcomes. At Stage 4 maturity, an organization has the capacity to conduct IT succession activities and therefore can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. Organizations implementing Stages 2 and 3 have in place the selection, control, and evaluation processes that are required by the Clinger-Cohen Act. Stages 4 and 5 define key attributes that are associated with the most capable organizations. Figure 3 shows the five maturity stages and the critical processes associated with each. As defined by the model, each critical process consists of “key practices” that must be executed to implement the critical process. In order to have the capabilities to effectively manage IT investments, an agency should, at a minimum, (1) build an investment foundation by putting basic, project-level control and selection practices in place (Stage 2 capabilities) and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency; and it should also conduct PIRs to maintain mature, integrated selection, control, and evaluation processes (Stage 3 capabilities). In addition, an agency would be well served by implementing capabilities for improving its investment process through performance evaluations of its portfolio and succession management of current investments (Stage 4 capabilities). In order to develop the capabilities to effectively manage its investments, FAA would, at a minimum, need to implement Stage 2 capabilities for both its NAS and non-NAS investments and Stage 3 capabilities for its portfolio of investments. FAA’s investment management capabilities vary depending on whether an investment is considered to be NAS or non-NAS. Specifically: For NAS investments, FAA has executed 30 of the 38 Stage 2 key practices that are required to establish a foundation for investment management maturity. For these investments, the agency has in place a strong set of processes to support investment management, although the JRC does not regularly review investments that have passed into the in-service management phase (i.e., operational systems). For its non-NAS investments, the agency has not yet adequately implemented a single management line of responsibility and the standard processes needed to manage in a consistent manner. Although some structured processes exist within individual business units, this lack of consistency undermines the agency’s maturity. 
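Because the stages are cumulative, a single unexecuted key practice is enough to hold an organization below a given stage, which is why the assessments in this report are expressed as counts of executed key practices. The sketch below is a toy illustration of that rollup logic with invented ratings and abbreviated critical-process lists; it is not GAO assessment tooling or the actual ITIM data.

```python
from typing import Dict

# Invented, abbreviated ratings: each stage maps critical processes to key
# practices rated True (executed) or False (not executed). The real Stage 2
# assessment of FAA's NAS investments covered 38 key practices.
ratings: Dict[int, Dict[str, Dict[str, bool]]] = {
    2: {
        "instituting the investment board": {"kp1": True, "kp2": True, "kp3": False},
        "capturing investment information": {"kp1": True, "kp2": True},
    },
    3: {
        "creating the portfolio": {"kp1": False, "kp2": False},
        "conducting postimplementation reviews": {"kp1": False},
    },
}

def executed_counts(stage: int):
    """Tally 'executed X of Y key practices', as in tables 3 and 10."""
    kps = [v for cp in ratings.get(stage, {}).values() for v in cp.values()]
    return sum(kps), len(kps)

def stage_satisfied(stage: int) -> bool:
    """A stage is satisfied only when every key practice in it is executed."""
    return all(all(kps.values()) for kps in ratings.get(stage, {}).values())

def attained_stage() -> int:
    """Cumulative rule: the highest stage whose requirements, and those of all
    lower stages, are fully satisfied. Stage 1 has no critical processes."""
    attained = 1
    for stage in sorted(ratings):
        if not stage_satisfied(stage):
            break
        attained = stage
    return attained

if __name__ == "__main__":
    print(executed_counts(2))   # (4, 5) with this toy data
    print(attained_stage())     # 1, because one Stage 2 key practice is unexecuted
```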
In Stage 3, the lack of regular JRC oversight of operational systems and the absence of a structured approach to managing non-NAS investments prevent FAA from managing its investments as a portfolio that includes all major NAS and non-NAS investments. In addition, the agency is not conducting PIRs on its major investments. FAA has not executed any of the Stage 4 key practices for managing the succession of its information systems, although the agency has begun to address this weakness by defining procedures for retiring investments in the AMS. When FAA implements all of the key practices associated with building the investment foundation and managing its investments as a portfolio, the agency will have greater assurance that it has selected the mix of investments that best supports its strategic goals and that it will be able to manage the investments to successful completion. At the ITIM Stage 2 level of maturity, an organization has attained repeatable, successful IT project-level investment control processes and basic selection processes. Through these processes, the organization can identify expectation gaps early and take appropriate steps to address them. According to ITIM, critical processes at Stage 2 include (1) defining IT investment board operations, (2) identifying the business needs for each IT investment, (3) developing a basic process for selecting new IT proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments. Table 2 describes the purpose of each of the Stage 2 critical processes. To its credit, FAA has put in place about 80 percent of the key practices associated with managing its NAS investments through the Stage 2 critical processes. The agency has satisfied all of the key practices associated with capturing investment information and most of those associated with instituting the investment board, meeting business needs, selecting an investment, and providing investment oversight. Most of the weaknesses in these critical processes relate to NAS investments in the in-service management phase. Table 3 summarizes the status of FAA’s critical processes for Stage 2, showing how many key practices FAA has executed in managing its NAS investments. The establishment of decision-making bodies or boards is a key component of the IT investment management process. At the Stage 2 level of maturity, organizations define one or more boards, provide resources to support their operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. The boards operate according to a written IT investment process guide that is tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. Once board members are selected, the organization ensures that they are knowledgeable about policies and procedures for managing investments. Organizations at the Stage 2 level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the IT investment board. According to ITIM, an IT investment management process guide should be a key authoritative document that the organization uses to initiate and manage IT investment processes and should provide a comprehensive foundation for the policies and procedures that are developed for all of the other related processes. 
(The complete list of key practices is provided in table 4.) FAA has executed 7 of the 8 key practices for this critical process. For example, in response to the congressional mandate for acquisition reform discussed earlier, which directed the agency to develop a new acquisition management system, FAA implemented AMS in April 1996. AMS establishes policy and guidance for all aspects of the agency’s acquisition life cycle and documents the investment management process used for NAS investments. The agency established the JRC as its corporate-level investment board for the NAS investments. The JRC makes select and control decisions, including corporate decisions on mission needs, acquisition investments, and acquisition program baseline changes; it also reviews and recommends approval of the agency’s F&E budget submission. The board is adequately resourced to support its operations. The JRC Secretariat Team supports the board in such ways as developing and updating guidance, scheduling meetings, and preparing and executing the JRC readiness process. In addition, the Mission Analysis Steering Group is responsible for assisting the board in prioritizing mission needs, while the Systems Engineering/Operational Analysis Team is to assist in addressing budget issues among investments. The JRC consists of senior officials from both business and IT areas, including the Chief Information Officer and the associate administrators representing FAA lines of business. These members are to exhibit the core competencies required by FAA in selecting executives and in assessing executive training needs. In addition, the agency offers a 3-day AMS overview course for all employees, including JRC members. Although the board as an entity does not oversee the development and maintenance of AMS, it is involved through FAA’s Acquisition System Advisory Group, which evaluates all proposed changes to AMS. To ensure that the board’s decisions are carried out, an acquisition program baseline document is approved at the JRC final investment decision point; this document identifies the capabilities, benefits, costs, and schedule for the approved investment, which are monitored by FAA through its variance reporting process. Despite these strengths, FAA has not yet clearly defined the relationship between the JRC and the newly formed ITEB. Although the ITEB was established by the Administrator to function as the central authority responsible for assuring that FAA IT investments are based on sound business practices, FAA has not yet clearly delineated the specific roles the ITEB is to play and the relationship it will have with the JRC. This task has been assigned to the ITEB as a longer-range initiative. Table 4 shows the rating for each key practice required to implement the critical process for instituting the investment board at the Stage 2 level of maturity. Each of the “Executed” ratings shown below represents an instance where, based on the evidence provided by FAA officials, we concluded that the specific key practice was executed by the organization. Defining business needs for each IT project helps to ensure that projects and systems support the organization’s business needs and meet users’ needs. This critical process ensures that a link exists between the organization’s business objectives and its IT management strategy. 
According to ITIM, effectively meeting business needs requires, among other things, (1) documenting business needs with stated goals and objectives, (2) identifying specific users and other beneficiaries of IT projects and systems, (3) providing adequate resources to ensure that projects and systems support the organization’s business needs and meet users’ needs, and (4) periodically evaluating the alignment of IT projects and systems with the organization’s strategic goals and objectives. (The complete list of key practices is provided in table 5.) FAA has in place 6 of the 7 key practices for meeting business needs. The agency’s AMS and mission analysis guidance calls for business needs for both proposed and ongoing IT projects and systems to be identified in the mission need statement developed during the mission analysis phase. FAA also has detailed procedures for developing this document that call for identifying business needs. Resources for ensuring that IT projects and systems support the organization’s business needs and meet users’ needs include service organizations, the Corporate Mission Analysis Organization, the Mission Analysis Steering Group, and detailed procedures and associated templates for developing mission need statements. FAA’s specific business mission, with stated goals and objectives, is defined in the Federal Aviation Administration Flight Plan for fiscal years 2004 through 2008. Further, FAA defines and documents business needs for both proposed and ongoing IT projects and identifies users and other beneficiaries during its mission analysis activities. In addition, the AMS policy calls for users to participate in project management throughout the FAA life cycle management process. For the three projects we reviewed, we verified that business needs and specific users and other beneficiaries were identified and documented in mission need statements as well as in other documents. In addition, users are involved in project management throughout the life cycle of the projects. For example, according to project officials, En Route Communications Gateway (ECG) users participate in project meetings, weekly integrated product team status meetings, and monthly En Route domain national deployment teleconferences. FAA Telecommunications Infrastructure’s (FTI) end users are heavily involved in the “operational test” period, which determines whether the equipment can be safely implemented in the NAS. VSCS Control Subsystem Upgrade users are involved in the project’s life cycle via a Web site through which they review and comment on project documentation. Despite these strengths, the JRC has no process for evaluating the organizational alignment of NAS systems through most of their in-service management phase, nor of non-NAS investments, which are discussed separately in this report. While the JRC does evaluate the alignment of projects and systems with organizational goals throughout the systems’ development and 2 years into their operations as part of the annual budget formulation process, it does not use any consistent process to review projects and systems after that point in their life cycles. For NAS systems in the in-service management phase, these activities are carried out within the business unit that owns the system, but the JRC does not regularly oversee these processes and may go for several years without reviewing a system’s alignment with organizational goals. 
In-service NAS systems return to the JRC only if they are judged to require additional funds for correction. Until FAA establishes a process for periodic evaluation of systems throughout the in-service management phase and takes corrective actions when misalignment occurs, the agency will not be able to ensure that these projects, totaling about $1.3 billion per year, continue to align with FAA’s strategic plans and its business goals and objectives. Table 5 shows the rating for each key practice required to implement the critical process for meeting business needs at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. Selecting new IT proposals and reselecting ongoing investments requires a well-defined and disciplined process to provide the agency’s investment board, business units, and developers with a common understanding of the process and the cost, benefit, schedule, and risk criteria that will be used both to select new projects and to reselect ongoing projects for continued funding. According to ITIM, this critical process requires, among other things, (1) making funding decisions for new proposals according to an established process; (2) providing adequate resources for investment selection activities; (3) using a defined selection process to select new investments and reselect ongoing investments; (4) establishing criteria for analyzing, prioritizing, and selecting new IT investments and for reselecting ongoing investments; and (5) creating a process for ensuring that the criteria change as organizational objectives change. (The complete list of key practices is provided in table 6.) FAA has executed 7 of the 10 key practices associated with selecting an investment. For example, the AMS establishes two processes—mission analysis and investment analysis—that together constitute a set of policies and procedures, as well as guidance that is designed to enhance the agency’s ability to select investments. In addition, FAA has policies and procedures for its annual F&E budget formulation process to reselect ongoing IT projects. Also, FAA’s AMS sets forth policies and procedures for reselecting ongoing IT investments by identifying their capability shortfalls and addressing them as new investments. The AMS also integrates funding with the process of selecting an investment by requiring the Systems Engineering/Operational Analysis Team to perform affordability assessments for new proposed investment programs; it may recommend funding reallocations from lower priority programs when an alternative solution cannot be funded within FAA planning and budgeting baselines. This team also supports the JRC to ensure that the executives’ funding decisions are aligned with selection decisions during the investment analysis activities. Resources for proposal selection activities include the program director, the Integrated Product Team, and the Investment Analysis Team, as well as detailed procedures and a template that have been defined for developing investment analysis reports. The investment analysis reports identify the evaluation criteria used, the alternatives analyzed, and the ranking of each alternative so that the JRC can select the best overall solution identified in the mission need statement. 
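As a purely notional illustration of the kind of criteria-based scoring and ranking that an investment analysis report summarizes, consider the sketch below. The criteria, weights, and figures are invented for this example and are not drawn from FAA guidance.

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    mission_fit: float        # contribution to mission needs, 0-10 (hypothetical scale)
    benefit_cost_ratio: float
    risk: float               # 0 (low) to 10 (high), hypothetical scale

# Hypothetical weights; in practice such weights would come from the
# evaluation criteria established during investment analysis.
WEIGHTS = {"mission_fit": 0.5, "benefit_cost_ratio": 0.3, "risk": 0.2}

def score(alt: Alternative) -> float:
    # Higher mission fit and benefit-cost ratio raise the score; higher risk lowers it.
    return (WEIGHTS["mission_fit"] * alt.mission_fit
            + WEIGHTS["benefit_cost_ratio"] * alt.benefit_cost_ratio
            - WEIGHTS["risk"] * alt.risk)

def rank(alternatives):
    """Order alternatives best-first, as a ranking in an investment analysis report might."""
    return sorted(alternatives, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Alternative("Alternative A", mission_fit=8.0, benefit_cost_ratio=1.4, risk=6.0),
        Alternative("Alternative B", mission_fit=7.0, benefit_cost_ratio=2.1, risk=3.0),
        Alternative("Alternative C", mission_fit=5.0, benefit_cost_ratio=3.0, risk=2.0),
    ]
    for alt in rank(candidates):
        print(f"{alt.name}: {score(alt):.2f}")
```

In practice, the JRC would weigh such a ranking alongside affordability and other considerations rather than funding strictly by score.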
The criteria that were established during the initial investment analysis phase are used by the Investment Analysis Team to rank each proposed project on the basis of how well it meets the agency’s mission needs compared with other projects. FAA uses the processes defined in the AMS for selecting new IT investments. In addition, it uses two processes to reselect ongoing IT investments. Specifically, FAA uses its annual budget formulation process for projects in development or in the first 2 years of operations. It also uses the AMS process when a system’s capability shortfall is identified, and it treats the correction of the shortfall as a new investment. The managers of the three projects we reviewed confirmed that their projects were selected using the AMS process. One project’s officials stated that this included market, alternatives, investment, and affordability analyses. The program managers also stated that the annual F&E budget formulation process is used to reselect their projects. These project officials also noted that if a project is scheduled for a hardware replacement, a reselection is done. The AMS process is followed to explore new alternatives and ensure that the replacement is in the best interest of the government. Despite these strengths, FAA has not developed similarly strong processes for NAS investments more than 2 years into their operations—those NAS systems that are in the in-service management phase. For example, while FAA’s F&E budget formulation process establishes criteria for analyzing, prioritizing, and reselecting IT investments for systems in development or up until 2 years into operations, neither of the two processes used to reselect IT investments has established criteria for investments beyond 2 years into operations. In addition, while FAA uses its annual budget formulation process to reselect projects that are part of the F&E budget, the agency does not have an analogous reselection process as part of its operations budget formulation. Until FAA establishes consistent criteria for reselecting all of its IT investments, it will not be adequately assured that it is consistently and objectively continuing to fund ongoing projects that still meet the needs and priorities of the agency in a cost-effective and risk-managed manner. Table 6 shows the rating for each key practice required to implement the critical process for selecting an investment at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. An organization should provide effective oversight for its IT projects throughout all phases of their life cycles. Its investment board should maintain adequate oversight and observe each project’s performance and progress toward predefined cost and schedule expectations as well as each project’s anticipated benefits and risk exposure. The investment board should also employ early warning systems that enable it to take corrective action at the first sign of cost, schedule, or performance slippages. This board has ultimate responsibility for the activities within this critical process. 
According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for management oversight; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) having regular reviews by each investment board of each project’s performance against stated expectations; and (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. (The complete list of key practices is provided in table 7.) FAA has in place 4 of the 7 key practices associated with effective project oversight. The agency has developed written policies and procedures for management oversight of its investments. These include (1) AMS; (2) the integrated program plan, which is the detailed planning document for all aspects of a program’s implementation, including program control; and (3) the Integrated Baseline Establishment and Management Process document for reporting variances from the performance expectations approved by the JRC in the acquisition program baseline. We verified that cost, schedule, benefit, and risk expectations were documented in the acquisition program baseline and that the integrated program plan contained details for project execution for En Route Communications Gateway and FAA Telecommunications Infrastructure. For the VSCS Control Subsystem Upgrade, performance expectations and details on project execution were both captured in the integrated program plan. In addition, the JRC Secretariat Team maintains a tracking system for action items that are assigned during a project’s acquisition reviews, including the action to be taken, the responsible FAA organization, and whether the underlying problem has been resolved. FAA has not established processes that bring investments before the JRC for oversight on a regular basis. There is a process for reporting variances from the performance expectations that were approved by the JRC in the investment’s acquisition program baseline. However, although this process is carried out as part of the F&E budget formulation for IT investments in development or less than 2 years into operations, it is not being carried out for investments that are part of the operations budget. Investments that are meeting performance expectations may not return to the JRC for several years. FAA also conducts acquisition reviews as a means for program offices to report to agency executives on the status of investments compared to program baselines. However, since program offices may select which investments they wish to bring forward for review, many investments may never come forward. Until FAA develops (1) procedures for reporting on an investment throughout its entire acquisition life cycle and (2) mechanisms for ensuring that all investments are reviewed regularly, the agency is placing itself at risk that underperforming investments will not be reported to the JRC in order for it to take appropriate actions. Table 7 shows the rating for each key practice that is required to implement the critical process for project oversight at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. To make good IT investment decisions, an organization must be able to acquire pertinent information about each investment and store that information in a retrievable format. 
During this critical process an organization identifies its IT assets and creates a comprehensive repository of investment information. This repository provides information to investment decision makers to help them evaluate the impacts and opportunities that would be created by proposed or continuing investments. It can provide insights and trends about major IT cost and management drivers. The repository can take many forms and does not have to be centrally located, but the collection method should identify each IT investment and its associated components. This critical process may be satisfied by the information contained in the organization’s current enterprise architecture, augmented by additional information—such as financial information and information on risk and benefits—that the investment board may require to ensure that informed decisions are being made. According to ITIM, effectively managing this repository requires, among other things, (1) developing written policies and procedures for identifying and collecting the information, (2) assigning responsibility for ensuring that the information being collected meets the needs of the investment management process, (3) identifying IT projects and systems and collecting relevant information to support decisions about them, and (4) making the information easily accessible to decision makers and others. (The complete list of key practices is provided in table 8.) FAA’s AMS guidance identifies specific information that is needed in the investment management process, including information for its investment analysis phase. FAA maintains a number of repositories of relevant information, including its Simplified Program Information Reporting & Evaluation database, which reports variances in cost, schedule, performance, or benefits from an investment’s approved acquisition program baseline. The information that is collected is made available to the JRC in several documents, including program plans and the acquisition program baseline document. The JRC Secretariat Team ensures that the investment board has all the relevant information it needs for its decision-making process. Table 8 shows the rating for each key practice required to implement the critical process for capturing investment information at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. FAA does not have a single set of processes for making consistent basic selection and control decisions for its non-NAS investments (Stage 2 capabilities). As previously discussed in the background section of this report, several business units within FAA make decisions about non-NAS investments. We reviewed the investment management processes of seven of these units—Information Services, Region and Center Operations, Regulation and Certification, Financial Services, Research and Acquisition, Air Traffic Services, and Human Resource Management. Appendix II describes the investment management processes we found in these units. The extent to which these processes comply with the ITIM framework for Stage 2 varies considerably by business unit, and FAA currently does not specify non-NAS investment management processes in a coordinated manner. Since the ITIM framework calls for a consistent investment management process, we assessed FAA’s non-NAS investment management capability at an aggregate level. That is, we assessed FAA’s capability to manage its non-NAS investments, not the capability of each individual business unit. 
Even though individual business units may have some of these processes in place, FAA as a whole has not yet defined the following: an investment management structure that allows the agency to consistently manage its non-NAS investments; a uniform process for ensuring that non-NAS investments are linked to business needs and meet users’ needs; a process for selecting new IT proposals and reselecting ongoing investments; a single process for reviewing the progress of investments and taking corrective action when performance expectations are not being met; or a comprehensive inventory of project and system information to support investment decisions. According to FAA officials, the agency has not defined a coherent investment management structure and a set of processes for non-NAS investments in the past because many of these investments have not had the agencywide impact of the NAS investments. However, because there is now recognition that a disciplined approach to managing non-NAS investments could help control FAA’s IT assets and costs in general, efforts are currently under way to address this weakness. As previously discussed, the ITEB has been chartered with responsibility for, among other things, making decisions about non-NAS IT investments, but it has not yet taken action on developing a standard process. Until FAA fully establishes the consistent practices it needs to make basic project selection and control decisions, executives will be hampered in their ability to effectively manage non-NAS investments and ultimately to find the opportunities to achieve the cost savings they are seeking. During Stage 3, the investment board enhances the IT investment management process by developing a complete investment portfolio and carrying out PIRs. An IT investment portfolio is an integrated, agencywide collection of investments that are assessed and managed collectively on the basis of common criteria. Managing investments within the context of such a portfolio is a conscious, continuous, and proactive approach to expending limited resources on an organization’s competing initiatives in light of the relative benefits expected from these investments. Taking an agencywide perspective enables an organization to consider its investments comprehensively, so that collectively the investments optimally address the organization’s missions, strategic goals, and objectives. Managing IT investments with a portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund, and continue to fund, based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. For an organization to reap the full benefits of the portfolio process, it should collect all of its investments into an enterprise-level portfolio that is overseen by its senior investment board. Although investments may initially be selected into subordinate portfolios—based on, for example, lines of business or life cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into this enterprise-level portfolio. The purpose of a PIR is to evaluate an investment after its development has been completed (i.e., after its transition from the implementation phase to the in-service management phase) in order to validate actual investment results. 
This review is conducted to (1) examine differences between estimated and actual investment costs and benefits and their possible ramifications for unplanned funding needs in the future and (2) extract “lessons learned” about the investment selection and control processes that can be used as the basis for management improvements. Similarly, PIRs should be conducted for investment projects that were terminated before completion, to help identify potential management and process improvements. According to ITIM, critical processes performed by Stage 3 organizations include (1) defining the portfolio criteria, (2) creating the portfolio, (3) evaluating the portfolio, and (4) conducting PIRs. Table 9 shows the purpose of each critical process in Stage 3. FAA has executed only 1 of the 27 key practices associated with Stage 3 critical processes: it has a process for distributing portfolio criteria to project management personnel and other stakeholders. The remaining 26 key practices were not executed—primarily because FAA does not involve the JRC in the regular oversight of non-NAS investments or of NAS investments during their in-service management phase, weaknesses that we noted in our assessment of Stage 2 requirements. Since Stage 3 requires an enterprisewide perspective, the lack of oversight of these classes of investments precludes the successful completion of most Stage 3 critical processes. FAA has not adopted such a perspective, which would enable the JRC to oversee all major IT investments, regardless of life cycle phase or business unit. Although it can be appropriate for FAA to manage its NAS, in-service NAS, and non-NAS investments as separate subordinate portfolios—depending on the successful execution of all Stage 2 key practices—its enterprise-level portfolio should contain all major IT investments regardless of life cycle stage or business line. In building this enterprise-level portfolio, the JRC can choose whether to include specific investments based on predetermined criteria, as described by the ITIM framework. Until FAA fully implements the critical processes associated with managing its investments as a complete portfolio, it will not have the data or enterprisewide perspective it needs to make informed decisions about all of its major IT investments. In addition, FAA has not executed the six key practices for conducting PIRs. In June 2004, in response to a recommendation contained in our 1999 report that FAA initiate PIRs for projects or programs within 3 to 12 months of deployment or termination, the NAS Configuration Management and Evaluation Staff developed a proposed approach to PIRs, but this approach was not implemented. In November 2003, the life cycle management policy team proposed a change to the AMS that would require conducting these reviews, but there has been no action on the proposal. Although the JRC has recently reaffirmed its commitment to implement PIRs, there is no policy and no established process to carry them out. If PIRs are not conducted on a routine basis, then FAA will not be able to effectively evaluate the results of its IT investments; this will affect the agency’s ability to determine whether to continue, modify, or terminate an IT investment in order to meet its stated mission objectives. Table 10 summarizes the status of FAA’s critical processes for Stage 3, showing how many associated key practices it has executed. 
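At its core, a PIR rests on a straightforward comparison of estimated and actual results. The sketch below is a minimal, hypothetical illustration of that comparison; the 10 percent threshold and the dollar figures are invented and are not FAA or OMB criteria.

```python
def variance_pct(estimate: float, actual: float) -> float:
    """Percentage deviation of the actual figure from the estimate."""
    return (actual - estimate) / estimate * 100.0

def pir_summary(name, est_cost, act_cost, est_benefit, act_benefit, threshold=10.0):
    """Flag cost overruns and benefit shortfalls larger than the (hypothetical) threshold."""
    cost_var = variance_pct(est_cost, act_cost)
    benefit_var = variance_pct(est_benefit, act_benefit)
    findings = []
    if cost_var > threshold:
        findings.append(f"cost exceeded the estimate by {cost_var:.1f} percent")
    if benefit_var < -threshold:
        findings.append(f"benefits fell {abs(benefit_var):.1f} percent short of the estimate")
    return {"investment": name, "findings": findings or ["within thresholds"]}

if __name__ == "__main__":
    # Entirely made-up figures, in millions of dollars.
    print(pir_summary("Example system", est_cost=100.0, act_cost=123.0,
                      est_benefit=140.0, act_benefit=118.0))
```

Lessons learned from such comparisons, for both completed and terminated projects, are what feed back into the selection and control processes.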
Once an agency has attained Stage 3 maturity, it evaluates its IT investment processes and portfolios to identify opportunities for improvement (Stage 4 capabilities). This entails (1) improving the portfolio’s performance and (2) managing systems and technology succession. We did not assess FAA’s capability for improving the portfolio’s performance, because it did not claim to be executing any of the relevant key practices in its self-assessment. According to ITIM, managing system and technology succession includes (1) defining policies and procedures for managing the IT succession process, (2) assigning responsibility for the IT succession process, (3) developing criteria for identifying IT investments that may meet succession status, and (4) periodically analyzing IT investments to determine whether they are ready for succession. This critical process enables an organization to recognize low-value or high-cost IT investments and augments the routine replacement of systems at the end of their useful lives. It also promotes the development of a forward-looking, solution-oriented view of IT investments that anticipates future resource requirements and allows the organization to plan appropriately. This process differs from the reselection activity in Stages 2 and 3 in that it focuses on anticipating and planning for the retirement of legacy systems and on meeting remaining requirements with other, perhaps new, systems. In addition, succession management takes place at the end of a system’s life cycle. FAA has not executed any of the nine key practices required to implement this critical process. Although the agency has defined procedures in AMS for retiring investments, it still needs to describe how to regularly review systems that are in operation in order to identify candidates for retirement. According to FAA, decisions on succession are made by the service organizations. However, no individual or group has been assigned responsibility for managing the succession process from an enterprise perspective, which would allow FAA to better anticipate and plan for future resource requirements. Without an institutionalized process for succession management, FAA may not be able to identify those IT investments that are eligible for succession in enough time to minimize the effect of the transition on their successors. In addition, by establishing an effective succession management process, the agency can identify systems for retirement, freeing resources for other, superior investments. We have previously reported that to effectively implement IT investment management processes, organizations need to be guided by a plan that (1) is based on an assessment of strengths and weaknesses; (2) specifies measurable goals, objectives, and milestones; (3) specifies needed resources; (4) assigns clear responsibility and accountability for accomplishing tasks; and (5) is approved by senior management. FAA has begun to take steps to resolve some of the weaknesses identified in this report. For example, at a June 10, 2004, meeting, the JRC decided to incorporate budget justification documents (Exhibit 300s), which are currently prepared for the Office of Management and Budget (OMB) as part of the President’s Budget formulation process, into the AMS process for managing NAS investments. 
The Exhibit 300 will become the board’s decision-making document, and essential information from existing AMS-required documents—the investment management report, the acquisition strategy paper, the integrated program plan, and the requirements documents—will be incorporated into the Exhibit 300. The JRC also recently decided to implement PIRs in order to track metrics during program implementation. Finally, at that same meeting, the board decided to collectively determine, at the meeting where the F&E budget is approved, which F&E and OPS programs should be brought forward for review the following year. This decision serves to bring certain investments in the in-service management phase under the JRC’s direct purview, although it does not specify that consistent criteria be established, as the ITIM framework requires. FAA has also begun to initiate steps to bring more clarity to the ITEB’s responsibilities, although the specifics have yet to be defined. In its charter, the ITEB is charged with making investment decisions about non-NAS IT investments. This action would begin to bring all of the non-NAS investments under a single authority. The charter suggests that the ITEB choose among three options: (1) to send major non-NAS investment decisions to the JRC, (2) to make the decision itself, given an acceptable review process similar to the JRC processes, or (3) to have the CIO, Chief Financial Officer, and owning assistant/associate administrator make the decision jointly. This description of the ITEB’s roles and responsibilities further alludes to the senior board’s evolving responsibility toward major non-NAS IT investments, although it falls short of laying out specific criteria for selecting which investments should be sent forward to the JRC. The ITEB has been given responsibility for four short-term initiatives as well, including establishing an agencywide cost control program for non-NAS expenditures and ensuring that all OMB Exhibit 300s receive a passing grade for the 2006 budget year. The ITEB has been charged with the long-term initiative of clearly delineating the roles it plays and its relationship with the more senior board. The successful completion of this initiative is likely to satisfy the single key practice that FAA has not yet executed in the Instituting the Investment Board critical process of the ITIM. The Chief Operating Officer’s recent reorganization of the ATO is intended to make the heads of the service units responsible for IT projects from their inception through the in-service management phase. This new organization is designed to support his expressed intention to increase accountability for systems in operation in order to manage costs more effectively. According to the Chief Operating Officer, FAA recognizes that good processes are needed for both NAS and non-NAS investments to improve the way the agency manages its investments. While FAA has initiated these improvement efforts, it has not linked them together in a plan with the characteristics listed above that would help coordinate and guide the efforts. Until FAA develops a plan that would allow for the systematic prioritization, sequencing, and evaluation of improvement efforts, the agency risks not being able to effectively establish mature investment management processes. DOT has recently initiated several efforts that can serve to provide better departmental oversight of FAA investments. 
This fiscal year DOT and FAA reached an agreement by which DOT reviews FAA’s Exhibit 300s as part of the department’s annual budget process, in which all departmental components participate. Under this agreement, DOT begins reviewing all FAA Exhibit 300s in June of each budget year, culminating in a review by the Department Investment Review Board in late August, prior to the submission of the budget to OMB in September. As part of this agreement, DOT has outlined a process and schedule for reviewing the fiscal year 2006 budget justifications for major FAA programs and is monitoring FAA’s progress in meeting this schedule. In addition, the department has identified about a dozen programs that it plans to monitor regularly and has begun reviewing these programs through its senior investment management decision-making board, on which the FAA Administrator is a voting member. DOT has also requested that FAA set reasonable expectations for cost, schedule, and performance for its major projects and that it then report quarterly on variances to those expectations. FAA submitted its first quarterly report as of June 2004. These regular reports are intended to help DOT maintain oversight of FAA’s processes and ensure that they are appropriate and consistent with OMB’s requirements. Furthermore, the department is currently planning to issue an investment management guide that specifies minimum expectations that its operating administrations (including FAA) are to follow in managing their investments. According to DOT officials, FAA has been complying with the department’s requests for information to facilitate its oversight process. Department officials are attributing their increased oversight—and cooperation from FAA—to the fact that the department has recently reinstituted its own investment management processes. In addition, DOT officials said that FAA now understands the role the department can play in helping it to obtain the funding it needs for its programs. FAA has established most of the project selection and control capabilities needed to manage its NAS investments. This should help provide the executive-level decision-making and oversight capabilities required to establish accountability and guide major IT investments through most of their life cycles. However, weaknesses remain. For example, although business units are involved in the regular review of investments throughout their life cycles, the JRC may not review the performance of operational systems for several years unless they require significant additional funds. Also, FAA has yet to define and implement the practices it needs to select and control its non-NAS investments. Ultimately, because the JRC does not regularly review NAS systems during the in-service management phase and does not regularly review non-NAS systems in general, significant portions of FAA’s approximately $2.5 billion investment in IT go without top-level executive oversight and are not viewed as part of an enterprisewide portfolio. FAA has taken some initial steps to implement PIRs, but it has not yet established a process to carry them out. The agency has begun to take some steps to develop improvements to address some of these weaknesses, such as establishing an Information Technology Executive Board with relevant responsibilities. 
In addition, the JRC has begun integrating some budgeting and oversight processes, and the Chief Operating Officer has begun to articulate a vision that includes additional accountability for investments in operations. But FAA has not developed a comprehensive plan to guide all improvement efforts. Such a plan would help coordinate and prioritize improvement efforts and help sustain commitment to the efforts under way. The increasing collaboration between FAA and DOT further contributes to the likelihood that the management of FAA’s investments will improve as FAA’s Exhibit 300s have the benefit of department-level review and the departmental investment review board conducts periodic reviews of selected projects. To strengthen FAA’s investment management capability and address the weaknesses discussed in this report, we recommend that the Secretary of the Department of Transportation direct the FAA Administrator to develop and implement a plan for improving FAA’s IT investment management processes. The plan should address the weaknesses described in this report, beginning with those we identified in our Stage 2 analysis and continuing with those we identified in our Stage 3 analysis. The plan should also draw together ongoing efforts as well as institute new initiatives where called for. The plan should, at a minimum, provide for accomplishing the following: Define procedures for aligning the JRC and the newly established ITEB. Establish a process for the JRC to periodically reevaluate the alignment of projects in the in-service management phase with strategic goals and objectives. Establish a process for the JRC to regularly review the performance of IT systems throughout their life cycles and take corrective actions when expected performance is not being met. Define and implement an IT investment management structure, including an investment management board and a disciplined process for managing all non-NAS investments. Define and implement processes for managing major investments as part of an enterprise-level portfolio, including NAS F&E investments, NAS investments in the in-service management phase, and non-NAS investments. Define and implement processes for carrying out PIRs on investments as they enter the in-service management stage. In developing the plan, the FAA Administrator should ensure that it (1) specifies measurable goals, objectives, and milestones; (2) specifies needed resources; (3) assigns clear responsibility and accountability for accomplishing tasks; and (4) is approved by senior management. In implementing the plan, the FAA Administrator should ensure that the needed resources are provided to carry out the plan and that progress is measured and reported periodically to the Secretary of Transportation. In commenting on a draft of this report, DOT’s Director of Audit Relations stated via e-mail that DOT appreciated the opportunity to review and offer comment on our report and that GAO had done a good job keeping the report balanced and fair, showing where FAA has many capabilities in place and identifying areas that need improvement. The Director also provided a technical comment, which we have incorporated into the report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies to other interested congressional committees, the Director of the Office of Management and Budget, the Secretary of Transportation, FAA's Administrator and CIO, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-9286 or Lester P. Diamond, Assistant Director, at (202) 512-7957. We can also be reached by e-mail at pownerd@gao.gov or diamondl@gao.gov, respectively. Key contributors to this report were William G. Barrick, Niti Bery, Joanne Fiorino, Michael Giannone, Sabine R. Paul, and Nik Rapelje. The objectives of our review were to (1) evaluate FAA's capabilities for managing its IT investments, (2) determine what plans the agency might have for improving these capabilities, and (3) describe how DOT oversees FAA's investments and investment process. Because FAA told us that it managed its NAS and non-NAS investments differently, we performed separate assessments of the practices to evaluate FAA's capabilities for managing IT investments. To address the first objective, for the NAS investments we reviewed the results of the agency's self-assessment of Stages 2, 3, and 4 practices using GAO's ITIM framework and validated and updated the results of the self-assessment through document reviews and interviews with officials. We reviewed written policies, procedures, and guidance and other documentation providing evidence of executed practices, including FAA's Acquisition Management System guidance, mission analysis and investment analysis guidance, and memorandums. We also reviewed JRC guidance and records of decision, acquisition review guidance and meeting minutes, and variance reporting procedures and reports. We did not assess FAA's progress in establishing the capabilities found in one of the two Stage 4 critical processes, entitled Improving the Portfolio's Performance, or in any of the Stage 5 critical processes, because FAA acknowledged that it had not executed any of the key practices in these critical processes. For the non-NAS investments, we reviewed the results of FAA's self-assessments of Stage 2 practices using GAO's ITIM framework and conducted interviews to clarify and update the results. We did not perform a detailed assessment of these practices because they most likely will be superseded by a new process (when it is defined) for managing non-NAS investments, and non-NAS investments are of lower cost and impact to FAA. As part of our analysis, we selected three IT projects as case studies to verify that the critical processes and key practices were being applied. We selected projects that (1) supported different FAA functional areas, (2) were in different life cycle phases, and (3) required different levels of funding. The three projects are described below: FAA Telecommunications Infrastructure (FTI)—FTI is a performance-based telecommunications services contract for voice, video, and data point-to-point support for telecommunications for the National Airspace System and its support system. It contributes to both the separation of aircraft (the mission-support network) and other FAA uses (the operational network, e.g., e-mail and phone). FTI will replace the current telecom system.
FTI will eliminate the need for other subnetworks, of which there are currently eight or nine, and therefore eliminate the management overhead associated with operating so many networks. The integration of multiple networks and subnetworks will provide a single source and single vehicle for telecom. FTI is in the Technical Operations unit and has estimated life cycle costs of $2 billion. The contract for FTI was awarded in June 2002. En Route Communications Gateway (ECG)—ECG is a mission-critical gateway, or interface, for data from radar sites to Air Route Traffic Control Centers. ECG will serve as a single domain communications gateway and will provide the path for exchanging flight plan data from outside sources and transfer data among systems. ECG provides a commercial-off-the-shelf nondevelopmental item digital gateway using a modern, open, and extensible platform consisting of modular, scalable hardware components. ECG will incorporate interface capability to support legacy and future systems and will provide the capability to transition to modern network communications and access more surveillance sources. The flexibility provided by the ECG system architecture will facilitate the evolution of the En Route domain modernization. ECG will replace the Peripheral Adapter Module Replacement Item system and provide a modern domain gateway that will support the current and future En Route infrastructure. ECG is in the En Route & Oceanic Service group and has estimated life cycle costs of $442.5 million through September 2015. Voice Switching and Control System (VSCS)—In our review of the VSCS program, we focused on one of its subcomponents, the VSCS Control Subsystem Upgrade (VCSU). The VCSU program, part of the Technical Operations Communications service group, is designed to maintain overall supportability of VSCS by replacing the hardware for the existing control subsystem, associated VSCS operational and application software, required software licenses, and supporting software and hardware documentation. Deliverables for the VCSU program include all hardware, spare parts, software, software licenses, system baseline documentation, training, and other technical documentation necessary to support the product at 21 locations. According to FAA, the VCSU program has a funding baseline of over $59 million and is in the operations and maintenance phase. For these projects, we reviewed project management documentation, such as mission needs statements, acquisition program baselines, and integrated program plans. We also interviewed the project managers for these projects. We compared the evidence collected from our document reviews and interviews to the key practices in ITIM. We rated the key practices as “executed” on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as “not executed” when we found insufficient evidence of a practice during the review or when we determined that there were significant weaknesses in FAA's execution of the key practice. To address our second objective, we obtained and evaluated documents showing what management actions had been taken and what initiatives had been planned by the agency. This documentation included JRC records of decisions, the agency's capital investment guidance, and the recently formed ITEB charter and meeting minutes.
We also interviewed the Chief Information Officer, other members of the JRC, and the Chief Operating Officer to determine what efforts FAA had undertaken to improve IT investment management processes. To address our third objective, we reviewed documentation on DOT's process for reviewing FAA's budget proposals and capital planning and investment control reviews. We also conducted interviews with both FAA and DOT officials, including DOT's CIO and Director for Capital Planning and Investment Control, to determine DOT's oversight role in FAA's investments and investment management processes. We conducted our work at FAA Headquarters in Washington, D.C., from October 2003 through July 2004, in accordance with generally accepted government auditing standards. ABA has an investment board that conducts periodic and monthly program reviews for all IT programs to determine whether a program will be approved as an IT investment. A life cycle process guide is now in place to direct the activities of the investment board along with providing oversight of IT projects within ABA. The business needs of a project within ABA, along with the dates for achieving them, need to be aligned with the strategic goals established in the FAA Flight Plan. Projects or systems that are no longer aligned with the Flight Plan will be decommissioned. A project management plan identifies, among other things, the system's users, customers, and types of services to be provided. Selecting and reselecting an IT investment within ABA involves both the executive management team and ABA's CIO team. The executive management team reviews the business needs of the investment and compares them against ABA's IT budget, while ABA's CIO team is involved with the selecting and reselecting processes by analyzing the technical costs associated with the IT investment and comparing those technical costs against ABA's IT budget. ABA uses its life cycle process guide to help manage its $25 million IT budget, which consists of 22 or 23 financial systems, 5 or 6 of them considered major programs under OMB's definition of a major IT investment. A requirement of the life cycle process guide is for every critical system in ABA to have a detailed project management plan that addresses performance measures such as cost, schedule, benefits, and risks. The day-to-day progress of IT projects is tracked through weekly summary reviews with IT staff against critical milestones that have already been established. For major IT projects, biweekly meetings are conducted that address any concerns with meeting the performance measures. ABA captures its IT asset information using its Information Technology Investment Portfolio System (ITIPS), which is available to all ABA management and system support personnel. The information in ITIPS is used to manage projects that are in production as well as to ensure that the life cycle activities are in alignment with FAA's mission statements. ARA uses its Operations Resource Management Team guide to select, control, and evaluate ARA IT investments. The team is composed of representatives from ARA service units. ARA investments are controlled and tracked through quarterly reviews. These reviews look at the cost, schedule, and overall performance of the investment. The business needs for ARA investments need to be mapped back to the Flight Plan. A monthly status review report is prepared in order to ensure that the business needs are tracking back to the Flight Plan.
ARA does not have any well-defined selection criteria since each program uses its own configuration management plan. The ARA Ops build process guides the establishment of new projects. A project plan does exist, along with established expenditures, which the program managers submit to the ARA CIO on a monthly basis. During these monthly status reviews, the CIO and the program managers decide whether an investment's resources, such as funding, need to be reallocated. Once the CIO and program manager decide that it is necessary for an investment's resources to be reallocated, the CIO will discuss the need further with the Deputy Associate Administrator for ARA, who ultimately will determine whether a program will receive additional resources, such as funding. With respect to the level of interaction that ARA has had with the JRC in the past, only one program from ARA, NextGen, has gone before the JRC. According to the ARA CIO, in order for a program to go to the JRC, there must be justification made to the council that the program is fully operational and is considered to be a benefit and a priority to FAA. The ARA Deputy Associate Administrator will determine if a program should go before the JRC for approval and funding. The configuration control board uses a database to capture asset inventory data about the systems that are owned by the ARA CIO. According to the ARA CIO, in order for IT assets to be effectively managed in ARA, there needs to be vision from AIO about what programs to invest in over the next 5 years. The Information Resource Management Executive Board is responsible for selecting, controlling, and evaluating ATS IT investments. Not all services within ATS have defined their business needs. Even though ATS has the NAS Support Integration Process (NSIP) data repository available for capturing IT asset information, including business needs, and for defining system users, the records are not consistently complete because some systems within ATS have not registered with NSIP. The ATS CIO manages the selection process, which begins with the NSIP registration criteria. Each business unit within ATS has its own project management plan and procedures. The day-to-day tracking of projects as well as the monitoring of whether corrective actions are being executed is also the responsibility of the individual business units. Even though the individual business units are tasked with this level of responsibility, the ATS CIO does play an oversight role by setting the criteria and policies for the investments to be made for the projects. ATS uses the NSIP metadata repository to collect any changes to the IT projects and systems by providing a full declaration of the project or system. This includes providing information to help ATS avoid unwanted costs due to systems having redundant functionality and determining whether a system's or a project's functions match the stated mission goals for ATS. NSIP also handles the technical rollover for ATS systems or projects. AIO's investment management process can be characterized as iterative and well managed, but undocumented. The AIO Business Plan and IT Strategy are used to ensure that, when funds are appropriated and allocated, they map back to the Flight Plan. Investments are controlled or tracked by the Deputy CIO on a monthly basis to get an indication of where the program is in the process against the expenditures that have already been established.
Weekly meetings are held with the unit's CIO to discuss any issues regarding AIO's investment management process. AIO does not have any written policies or procedures for identifying business needs for its IT projects. Only one of its major projects, NAS Adaptation Service and Environment, has documented its requirements, which include specific users. AIO uses an undocumented process for reviewing new IT proposals to reach an agreement on selection. There are no AIO-wide policies or procedures for managing projects or investment oversight. The Information Technology Executive Board (ITEB) has been formed to provide a governing structure for non-NAS programs. One of the targets for ITEB is to look at cost control and cross-cutting IT initiatives by involving the heads of the lines of business. The ITEB is also going to be involved with improving the scores on the Exhibit 300 business cases for OMB. AIO uses ITIPS to track its asset inventory and IT investments. The Deputy CIO of AIO is responsible for ensuring that the inventory located in ITIPS meets the needs of AIO's investment management process. According to AIO, the information within ITIPS is updated at least twice a year. AHR does not have an investment board. Instead, AHR's senior management is responsible for selecting, controlling, and evaluating all IT investments by using established agency acquisition policies and procedures to make investment management decisions. Business needs and specific users for each project are identified within the project plan and are aligned with the AHR Strategic Plan, the FAA Flight Plan, and the AIO Plan. AHR is also aligning its business needs to the ITEB plans. Business needs are re-evaluated on a quarterly basis to ensure that a project is aligned with FAA's strategic goals and objectives. AHR senior management uses its prioritization process to evaluate and select investments for funding. The office and center directors determine their requirements and then a budget request is submitted for proposal funding. AHR receives an allowance amount from the budget office. The first priority is to handle personnel payments. The remaining balance is then redistributed to the business divisions. The "building blocks" process starts at this point. This is when base funding is reviewed to decide whether a current investment needs continued funding, based on questions about the importance of continuing to fund a particular project, the project's activity, and what the impact would be if the project were no longer funded. Each division will submit a list of prioritized projects with costs to the directorate. This list may exceed the budget level. The directorate will reprioritize the original list. AHR has a Human Resource Management Automation Plan that contains procedures for approving IT projects and describes the policies and procedures that AHR uses for project management. Despite having project management policies and procedures, not all projects within AHR have a formal project plan. The size and scope of the project are two factors that help determine whether a project has a formal project plan. AHR Division Managers ensure that projects are on time by performing quarterly reviews that assess a project's cost and schedule. AHR uses a color scheme (red, green, and yellow) to indicate the schedule status of major milestones. AHR uses ITIPS as its inventory for making investment management decisions. AHR projects are listed in ITIPS, along with business cases.
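The report does not spell out how AHR derives its red, green, and yellow milestone indicators from its quarterly cost and schedule reviews. As a minimal sketch of how such a scheme might work in practice, the snippet below classifies schedule slippage against a planned milestone duration; the 5 and 15 percent thresholds, project names, and durations are purely illustrative assumptions, not AHR's actual criteria.

```python
# Minimal sketch of a red/yellow/green milestone status indicator.
# AHR's actual thresholds are not described in this report; the 5 and 15
# percent cutoffs below are illustrative assumptions only.

def milestone_status(planned_days: int, actual_days: int) -> str:
    """Classify schedule slippage against a planned milestone duration."""
    slippage = (actual_days - planned_days) / planned_days
    if slippage <= 0.05:
        return "green"   # on or near schedule
    if slippage <= 0.15:
        return "yellow"  # modest slippage, watch item
    return "red"         # significant slippage, needs corrective action

# Example quarterly review data (hypothetical projects and durations).
projects = {"Payroll system upgrade": (90, 92), "Staffing portal": (120, 150)}
for name, (planned, actual) in projects.items():
    print(f"{name}: {milestone_status(planned, actual)}")
```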
The IT Configuration Management Board is ARC's investment review board. The board's charter has recently been revised to provide more traceability back to the Flight Plan. The board functions include evaluating potential IT investment options for ARC, making recommendations on IT investment, establishing ARC-wide IT standards, and developing and maintaining investment policies and procedures. The board is led by the unit's CIO and includes four IT managers from the regional offices and aeronautical center and two members from the ARC Management Team. The ARC Management Team makes the final selection decisions. The IT investment management decisions are then incorporated into the ARC Business Plan. The ARC unit is also involved with cross-organizational investment decisions for FAA through its membership on the FAA CIO Council. Business needs are identified through entries made in ITIPS, along with documentation from Exhibit 300s and Exhibit 53s. ARC does not have its selection criteria documented. To evaluate and select IT investments, the ARC IT Configuration Management Board considers such things as benefits to ARC across the regions, expected return on investment, technical feasibility, and risk. The ARC business plan and the Flight Plan are the documents that address these priorities. ARC does not have policies or procedures for project management. Instead, ARC uses a weekly teleconference to address expectations and progress of ARC-wide IT initiatives at the IT manager level across ARC. According to the ARC CIO, a second teleconference has been added to discuss portfolio management—schedule, budget, training, and deployment, along with whether the project will be integrated with other lines of business. ARC uses ITIPS as its standardized repository for collecting asset information that will be useful for ARC's IT investment management decisions by providing information about what types of systems and functions are available and how they are supporting a specific business issue. Similar to an IT investment board, AVR has a two-tiered management structure that is composed of the AVR management team and the CIO management team. The AVR management team includes the Associate Administrator and the Service Directors, who make the final decisions based upon recommendations and input from the CIO management team and its business partners from each of the service units. According to AVR, its IT investment process guide is still under development and will be completed at the end of fiscal year 2004. Each line of business within AVR identifies and documents its business needs, including project requirements and specific users. Once the business needs have been identified, the IT Management and Resources section prioritizes them for funding. Programs in AVR are reviewed quarterly. For major projects, biweekly meetings are designed to look at project milestones to see whether they are being met, and the results are presented to the AVR management team. The AVR CIO management team is responsible for monitoring projects and reporting to the AVR management team. AVR's system inventory is a part of its enterprise architecture. The system inventory is being used primarily in developing the Exhibit 300s. The performance of IT projects in AVR is monitored daily, based upon each project's individual plan, using project management tools such as MS Project.
According to AVR, not all projects have a project plan in place, but AVR is trying to make it a requirement.
The Federal Aviation Administration's (FAA) mission is to promote the safe, orderly, and expeditious flow of air traffic in the United States airspace system, commonly referred to as the National Airspace System (NAS). To maintain its ability to effectively carry out this mission, FAA embarked, in 1981, on a multibillion-dollar effort to modernize its aging air traffic control (ATC) system, the principal technology component of the NAS. Yet the NAS modernization has continued to be plagued by cost increases, schedule delays, and performance shortfalls. To gain insight into how FAA is meeting its management challenges, congressional requesters asked GAO to evaluate FAA's processes for making IT investment management decisions. The objectives of this review included (1) evaluating FAA's capabilities for managing its IT investments and (2) determining what plans, if any, the agency might have for improving these capabilities. Judged against the criteria of GAO's framework for information technology investment management (ITIM), which measures the maturity of an organization's investment management processes, FAA has established about 80 percent of the basic selection and control practices that it needs to manage its mission-critical investments. For example, business lines actively monitor projects throughout their life cycles. However, the agency's senior IT investment board does not regularly review investments that are in the "in-service management," or operational, phase of their life cycles, and this creates a weakness in FAA's ability to oversee more than $1 billion of its IT investments. In addition, the agency has not yet established the key practices that would allow it to manage all of its investments as one portfolio--an integrated set of competing options. Until FAA has established the practices that would enable it to effectively manage its annual IT budget of about $2.5 billion, agency executives lack assurance that they are selecting and managing the mix of investments that best meets the agency's needs and priorities. The agency has initiated efforts to improve its investment management processes, but it has not yet developed and implemented a comprehensive plan--supported by management--to guide all of its improvement efforts. Such a plan is crucial in helping FAA to coordinate and prioritize its improvement efforts and sustain its commitment to the efforts it already has under way. Without such a plan--and controls for implementing it--FAA will be unlikely to develop a mature investment management capability.
The legal framework for addressing and paying for maritime oil spills is identified in OPA, which was enacted after the 1989 Exxon Valdez spill. OPA places the primary burden of liability and the costs of oil spills on the owner and operator of the vessel or onshore facility and the lessee or permittee of the area in which an offshore facility is located. This “polluter pays” framework requires that the responsible party or parties assume the burden of spill response, natural resource restoration, and compensation to those damaged by the spill, up to a specified limit of liability. In general, the level of potential exposure under OPA depends on the kind of vessel or facility from which a spill originates and is limited in amount unless the oil discharge is the result of gross negligence or willful misconduct, or a violation of federal operation, safety, and construction regulations, in which case liability under OPA is unlimited. For oil spills from an offshore facility, such as the Deepwater Horizon, liability is limited to all removal—or cleanup—costs plus $75 million. Under OPA, before any vessel larger than 300 gross tons can operate in U.S. waters, the owner/operator must obtain a Certificate of Financial Responsibility (COFR) from NPFC. This COFR demonstrates that the owner/operator has provided evidence of financial responsibility to pay for removal costs and damages up to the liability limits required by OPA. These OPA requirements for demonstrating financial responsibility apply only to the statutory maximum amount of potential liability under OPA, although states may impose additional liabilities and requirements related to oil spills in state waters. OPA requires that, subject to certain exceptions, such as removal cost claims by states, all nonfederal claims for OPA-compensable removal or damages be submitted first to the responsible party or the responsible party's guarantor. If the responsible party denies a claim or does not settle it within 90 days, a claimant may present the claim to the federal government to be considered for payment. To pay specified claims above a responsible party's liability limit, as well as to pay claims when a responsible party does not pay or cannot be identified, OPA authorizes use of the Fund subject to limitations on the amount and types of costs. For example, under OPA, the authorized limit on Fund expenditures for a single spill is currently set at $1 billion (without consideration of whether the Fund was reimbursed for any expenditures). In addition to paying claims, the Fund is used to reimburse government agencies for certain eligible costs they incur. Further, within the $1 billion cap, the costs for conducting a natural resource damage assessment and claims paid in connection with any single incident shall not exceed $500 million. OPA provides that the President designate the federal officials and that the governors designate the state and local officials who act on behalf of the public as trustees for natural resources. OPA regulations provide that the trustees may recover costs for natural resource damage assessment and restoration. The Fund may not be used for certain types of personal injuries or damages that may arise related to an oil spill incident, such as financial losses associated with oil company investments by members of the public. Recovery for such damages and injuries may be governed by other federal statutes, common law, or various state laws.
Federal agencies are authorized to use the Fund to cover their oil removal costs from the affected areas to the extent the Fund has funds available within the $1 billion cap. The federal government is entitled to reimbursement from responsible parties for such costs. The Coast Guard's NPFC administers uses of the Fund, including reimbursing government agencies for their removal and cleanup costs; adjudicating individual and business claims submitted to the Fund for payment; and pursuing reimbursement from the responsible party for costs and claims paid by the Fund. NPFC bills the responsible parties directly, including BP in this case, for costs government agencies have incurred, and all payments received from responsible parties are deposited into the Fund. OPA defines the costs for which responsible parties are liable and for which the Fund is made available for compensation in the event that the responsible party does not pay, cannot pay, or is not identified. As described in greater detail in appendix V, “OPA compensable” costs include two main types:
• Removal Costs: Removal costs are incurred by the federal government or any other entity taking approved action to respond to, contain, and clean up the spill. For example, removal costs include cleaning up adjoining shoreline affected by the oil spill and the equipment used in the response—skimmers to pull oil from the water, booms to contain the oil, planes for aerial observation—as well as salaries, travel, and lodging costs for responders.
• Damages: OPA-compensable damages cover a wide range of both actual and potential adverse impacts from an oil spill. For example, damages from an oil spill include the loss of profits to the owner of a commercial charter boat if the boat was trapped in port because the Coast Guard closed the waterway in order to remove the oil, or personal property damage to the owner of a recreational boat or waterfront property that was oiled by the spill, for which a claim may be made first to the responsible party, if possible, or to the Fund.
In addition to OPA-compensable costs, the federal government can also incur other non-OPA-compensable costs associated with oil spills. For example, the federal government had various non-OPA-compensable costs for the Deepwater Horizon oil spill, such as Department of Homeland Security (DHS) costs associated with providing additional staff to NPFC for receiving and processing claims. The National Oil and Hazardous Substances Pollution Contingency Plan, more commonly called the National Contingency Plan, is the federal government's blueprint for responding to oil spills and hazardous substance releases. The National Contingency Plan provides the organizational structure and procedures for preparing for and responding to discharges of oil and releases of hazardous substances, pollutants, and contaminants. The plan outlines approved procedures and removal activities when responding to an oil spill and identifies the following four phases of response operations for oil discharges:
1. Discovery and Notification include activities conducted to discover oil spills or to notify appropriate authorities of oil spills.
2. Preliminary Assessment and Initiation of Action include activities conducted to assess the magnitude and severity of the spill and to assess the feasibility of removal and plan appropriate actions. These activities are necessary whether or not the responsible party is taking action.
3. Containment, Countermeasures, Cleanup, and Disposal include oil spill cleanup activities such as hiring contractors and transporting and staging required supplies and needed equipment.
4. Documentation and Cost Recovery include the activities necessary to support cost recovery and record uses of the Fund.
Three of the four phases for oil removal remain under way for the Deepwater Horizon incident, and the operational response is likely to continue for years. The first phase, discovery and notification, is substantially complete. Subject to certain thresholds, the costs incurred in phases two, three, and four are eligible to be paid from the Fund. The Fund's primary revenue source is an 8-cent-per-barrel tax on petroleum products either produced in the United States or imported from other countries. Other revenue sources include recoveries from responsible parties for costs of removal and damages, fines and penalties paid pursuant to various statutes, and interest earned on the Fund's U.S. Treasury investments. In fiscal year 2009, the barrel tax accounted for 92 percent of the Fund's revenue. As shown in figure 1, the Fund's balance has varied over the years. The barrel tax expired in December 1994 and was reinstituted at 5 cents per barrel in April 2006 as mandated by the Energy Policy Act of 2005. The Energy Improvement and Extension Act of 2008 increased the tax to 8 cents per barrel and provides that the Fund's barrel tax shall expire after December 31, 2017. In fiscal year 2011, the increase to the Fund is primarily attributable to reimbursements received from responsible parties for the Coast Guard's costs incurred in response to the Deepwater Horizon incident. Specifically, as of May 31, 2011, the Coast Guard had billed and received from responsible parties $315.3 million for Coast Guard recoverable, or indirect, costs, such as personnel and equipment. According to the agency, the Coast Guard has historically viewed its OPA recoverable costs as activities normally funded through the agency's operating expense appropriation, and thus it has not sought reimbursement for these costs from the Fund. As shown in figure 2, the Fund has been administratively divided into two major components—the Emergency Fund and the Principal Fund—administered by the Coast Guard's NPFC. The Emergency Fund authorizes the President to make available $50 million each year to cover immediate expenses associated with mitigating the threat of an oil spill; costs of oil spill containment, countermeasures, and cleanup and disposal activities; and other costs to initiate natural resource damage assessments. Amounts made available remain available until expended. For the Deepwater Horizon oil spill, the Coast Guard's Federal On-Scene Coordinator used the Emergency Fund to pay for oil spill removal activities (i.e., the equipment used in removal activities and for the proper disposal of recovered oil and oil debris), and the Federal Natural Resource Damage Trustees also entered into reimbursable agreements with NPFC with respect to funding for activities to initiate natural resource damage assessments. To the extent that available amounts are inadequate for an emergency (as was the case in the Deepwater Horizon oil spill), the Maritime Transportation Security Act of 2002 granted authority for the Coast Guard to advance up to $100 million to pay for oil spill removal activities, and that amount was advanced from the Principal Fund to the Emergency Fund.
In June 2010, Congress amended OPA to authorize emergency advances for the Deepwater Horizon oil spill in increments of up to $100 million for each cash advance, but the total amount of all advances may not exceed the $1 billion per-incident cap. In contrast to the Emergency Fund, the Principal Fund is to be used to provide funds for natural resource damage claims, loss of profits and earning capacity claims, and loss of government revenues. The Principal Fund also provides for certain agency appropriations, including for the Coast Guard, the Environmental Protection Agency (EPA), and the Department of the Interior (DOI)—each of which receives an annual appropriation from the Fund through the Principal Fund to cover administrative, operational, personnel, and enforcement costs. Consistent with its Fund management responsibilities, in response to the Deepwater Horizon oil spill, NPFC is responsible for billing the responsible parties, including BP, directly for costs that government agencies have incurred. The payments NPFC receives from BP are to be deposited into the Fund, and NPFC reimburses agencies for their removal costs. Funds are to be disbursed from the Fund to government agencies using two vehicles—Pollution Removal Funding Authorizations (PRFA) and Military Interdepartmental Purchase Requests (MIPR). The PRFA commits the Fund to reimburse costs incurred for agreed-upon pollution response activities undertaken by a federal agency assisting the Federal On-Scene Coordinator. The terms of a PRFA include relevant (1) personnel salary costs, (2) travel and per diem expenses, (3) charges for the use of agency-owned equipment or facilities, and (4) expenses for contractor or vendor-supplied goods or services obtained by the agency for removal assistance. Similarly, the Federal On-Scene Coordinator may issue a MIPR for agreed-upon activities of the Department of Defense (DOD) or its related components and for some other agencies' activities. In contrast to PRFAs, MIPRs generally commit the Fund to disburse funds for oil spill response activities prior to conducting the activity and incurring the related costs. However, for the Deepwater Horizon oil spill, both NPFC and DOD established procedures for submitting documentation on a regular basis for MIPRs authorized in response to this spill of national significance. The Coast Guard, without in any way relieving the other responsible parties it identified of liability, approved BP's advertisement of its claims process. In response to economic harm caused by the Deepwater Horizon oil spill and to fulfill its obligations as a responsible party, BP established a claims process and multiple claims centers throughout the Gulf states. On May 3, 2010, BP began paying emergency compensation to individuals and businesses. BP stated that emergency payments would continue as long as individuals and businesses could show they were unable to earn a living because of injury to natural resources caused by the oil spill. According to BP, emergency payments would be based on 1 month of income and would be adjusted with additional documentation. BP has been working to ensure that the other Deepwater Horizon oil spill responsible parties contribute to the response. On May 20, 2011, BP announced that it had reached an agreement with MOEX Offshore 2007 LLC and its affiliates to settle all claims between the companies related to the Deepwater Horizon oil spill, which included MOEX paying $1.065 billion to BP.
Additionally, on October 17, 2011, BP announced that it had reached an agreement with Anadarko Petroleum Company to settle all claims between the companies related to the Deepwater Horizon oil spill, which included Anadarko paying $4 billion to BP. On June 16, 2010, President Obama announced that BP had agreed to set aside $20 billion to pay certain economic damage claims caused by the oil spill. On August 6, 2010, BP established an irrevocable Trust and committed to fund it on a quarterly basis over 3-1/2 years to reach the $20 billion total (as shown in fig. 3). The Trust is to pay some OPA-compensable claims as well as some other claims for personal injuries that are not OPA-compensable, but for which BP would be liable under other law. On August 23, 2010, the GCCF took over the administration of the claims process and the centers BP had established. Since it began operating, the GCCF has offered the following kinds of payments:
• Emergency Advance Payments: Payments available to individuals and businesses that were experiencing financial hardship resulting from damages incurred from the Deepwater Horizon oil spill. GCCF considered claims for emergency payments that were submitted by November 23, 2010.
• Quick Payment Final Claim: On December 13, 2010, BP announced that individuals and businesses that had received emergency payments from the GCCF were eligible for a quick payment final claim, which offers a fixed amount of $5,000 for individuals and $25,000 for businesses. Acceptance of such a claim would resolve all claims by that claimant against BP, including past and future alleged damages. The GCCF Protocols for Interim and Final Claims provides that final claims can be submitted to the GCCF through August 23, 2013.
• Final Payment: Those who do not choose or are not eligible for the quick payment may submit a full review final payment claim for all documented losses and damages. Acceptance of a final claim would resolve all claims by that claimant against BP, including past and future alleged damages. Under GCCF procedures, claimants will have until August 23, 2013, to estimate damages and submit claims for final payment.
• Interim Payments: The alternative to a final payment is to make an interim payment claim for past damages that have not been compensated. Individuals and businesses receiving interim payments are not required to sign a release of liability and may file a final claim at a later date. The GCCF Protocols for Interim and Final Claims provides that interim claims can be submitted to the GCCF through August 23, 2013.
As of May 31, 2011, GCCF had paid $4.2 billion for individual and business claims, as shown in table 1. While the GCCF is scheduled to stop receiving claims on August 23, 2013, BP's obligation, as a responsible party under OPA, to receive claims will continue after the GCCF closes. Both the individual circumstances of the Deepwater Horizon incident and the overall framework of how the federal government responds to oil spills present a mix of financial risks to the Fund and the federal government. The extent of financial risks to the federal government from the Deepwater Horizon incident is closely tied to BP and the other responsible parties and guarantors.
Because the federal government’s Fund would pay if the responsible party (BP through its Trust, for example) did not, and given the expectation for numerous expenses to be paid from the Trust and the fact that the full amount of damages may not be fully determined for some time, the extent of any long-term financial risks for the federal government as a result of this spill is not clear. Federal agency cleanup and restoration activities are underway and agencies continue to incur costs and submit them for reimbursement. As a result, it is possible that expenditures from the Fund for Federal removal costs and claims will reach the $1 billion cap, as the cap balance was over $626 million on May 31, 2011. When the cap balance reaches the total expenditure cap of $1 billion, no further payments to reimburse agencies’ costs (or to pay individual or business claims if not paid by the responsible parties) can be made from the Fund, so federal agencies would no longer be able to obtain reimbursement for their costs. Finally, the federal government’s longer-term ability to provide financial support in response to future oil spills is also at risk because the Fund’s primary source of revenue, a tax on petroleum products, is scheduled to expire in 2017. BP has committed to set aside $20 billion to cover potential Deepwater Horizon oil spill expenses—and has stated its intent to pay expenses over the $20 billion if needed. BP’s track record for reimbursing federal agencies for their expenses to this point has been favorable. For example, as of May 31, 2011, NPFC had sent 11 invoices to all of the responsible parties covering federal and state OPA-compensable costs totaling $711 million and BP paid all 11 invoices. However, until the total expenses of the Deepwater Horizon oil spill have been fully determined and those amounts have then been paid by and reimbursed to the federal government, the extent of any federal government financial exposure remains unknown. The financial responsibility for the spill will ultimately be determined through a lengthy and complex process involving the application of different laws and regulations, and depends upon a continuation of the ability of the responsible parties to pay expenses associated with the Deepwater Horizon oil spill. Although BP has established a $20 billion Trust to pay claims from individuals and businesses harmed by the spill, a number of uncertainties regarding the Trust’s uses may impact its ability to adequately reimburse claimants, increasing the risk that the federal government will ultimately be responsible for paying the remaining claims. Although all uncertainties—and the associated expenses—may not be known for many years, some uncertainties that are known relate to the following issues.  The federal government has begun an extensive natural resource damage assessment process, but the associated costs have yet to be determined. In order to start the process, in May 2010, BP agreed to provide $10 million to DOI and $10 million to the National Oceanic and Atmospheric Administration (NOAA) in the Department of Commerce. Also, in April 2011, BP committed up to $1 billion from the Trust to projects to help restore damaged natural resources in the Gulf of Mexico, such as the rebuilding of costal marshes, replenishment of damaged beaches, conservation of sensitive areas for ocean habitat for injured wildlife, and restoration of barrier islands and wetlands that provide natural protection from storms. 
The natural resource damage assessment and restoration process will take years to complete, so the full costs for which BP and the other responsible parties are liable have yet to be determined. The report of the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling estimates that fully restoring the Gulf will take $15 billion to $20 billion and over 30 years. If the responsible parties are unable or unwilling to pay, then the agencies' costs for the natural resource damages, including the costs to assess and restore, rehabilitate, replace, or acquire equivalent natural resources, would need to be reimbursed from the Fund (provided that funds were still available, given the $1 billion per-incident cap). The responsible parties also are likely to face fines and penalties, which have yet to be determined and which will be levied by federal and state governments. In particular, under the Clean Water Act, liable parties face substantial administrative and civil penalties that may be imposed by EPA or DHS. According to the BP Oil Spill Commission Report, the maximum Clean Water Act civil penalties could range from $4.5 billion to $21 billion. BP and the other responsible parties face over 500 lawsuits from the federal government, states, investors, employees, businesses, and individuals. The extent to which these lawsuits will impact the responsible parties financially is uncertain at this time since they will take years to litigate. BP has stated that it may use the Trust to pay lawsuit settlements as well as to pay claims and natural resource damages. Justice is continuing to evaluate federal government costs incurred in relation to the Deepwater Horizon oil spill that are not OPA-compensable. On May 13, 2011, Justice sent the responsible parties an invoice requesting reimbursement to the federal government for $81.6 million (for agencies' costs incurred through December 2010). Although BP has stated that it will pay expenses over the $20 billion, if necessary, it is uncertain how this would be accomplished over time, thus posing an element of risk to the federal government. In addition, although MOEX and its affiliates have settled with BP by paying $1.065 billion and Anadarko settled with BP, which included a payment of $4 billion, other responsible parties have not reached a settlement. If BP becomes unable to pay future cleanup costs, individual and business claims, and natural resource restoration costs, the federal government may need to consider paying costs and then pursuing reimbursement from the other responsible parties. The Deepwater Horizon oil spill amounts that NPFC counted toward this cap totaled $626.1 million as of May 31, 2011, thereby approaching the $1 billion per-incident cap mandated by OPA. The $626.1 million consists of $128.0 million incurred by the Coast Guard and $498.1 million incurred by other agencies. Once expenditures from the Fund reach the cap, NPFC will be statutorily barred from reimbursing federal agencies for response and restoration work or from paying individuals and businesses to settle claims. Consequently, if federal agencies did not receive dedicated appropriations for oil spill costs, they would be faced with reallocating their appropriated funding to cover oil spill costs or seeking additional funding from Congress. In November 2010, we suggested that Congress may want to consider setting a Fund cap associated with an incident, based upon net expenditures (expenditures less reimbursements).
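To make the difference between the cap as currently computed (gross expenditures) and the net-expenditure approach suggested above more concrete, the following sketch works through the arithmetic. The $626.1 million counted toward the cap and the $1 billion per-incident limit come from this report; the $200 million reimbursement amount and the function names are purely illustrative assumptions, not actual Fund data.

```python
# Minimal sketch of the two cap-accounting approaches discussed above.
# The $626.1 million counted toward the cap and the $1 billion per-incident
# cap come from the figures cited in this report; the reimbursement amount
# below is purely hypothetical and is used only to illustrate the difference.

PER_INCIDENT_CAP = 1_000_000_000          # OPA per-incident expenditure cap
counted_expenditures = 626_100_000        # amounts counted toward the cap, May 31, 2011
hypothetical_reimbursements = 200_000_000 # illustrative only; not an actual Fund figure

def headroom_gross(expenditures, cap=PER_INCIDENT_CAP):
    """Remaining room under a cap based on gross expenditures (current approach)."""
    return cap - expenditures

def headroom_net(expenditures, reimbursements, cap=PER_INCIDENT_CAP):
    """Remaining room if the cap were based on net expenditures
    (expenditures less reimbursements), as suggested in November 2010."""
    return cap - (expenditures - reimbursements)

print(f"Gross-basis headroom: ${headroom_gross(counted_expenditures):,}")
print(f"Net-basis headroom:   ${headroom_net(counted_expenditures, hypothetical_reimbursements):,}")
```

Under the gross approach, reimbursements received from responsible parties do not restore any room under the cap; under the net approach they would, which is the substance of the matter for congressional consideration noted above.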
As of May 31, 2011, government agencies continue to submit documentation of their Deepwater Horizon oil spill recovery costs for reimbursement from the Fund. (App. VII provides information about government agencies' authorized response costs and amounts reimbursed.) Further, although as of May 31, 2011, all individual and business claims reviewed by NPFC have been denied, claims continue to be submitted. According to NPFC officials, individuals and businesses will continue to submit claims associated with the Deepwater Horizon oil spill for several years. In addition, the natural resources restoration process is beginning, and these associated costs will accumulate over many years. Uncertainties exist regarding the primary revenue source of the Fund, which is set to expire in 2017, and the potential for future oil spills. If the Fund's primary source of revenue expires, this could affect future oil spill response and may increase risk to the federal government. Also, although the Deepwater Horizon oil spill was the largest oil spill disaster in U.S. history, over 500 spills of varying size and response requirements occur annually.
• The per barrel tax revenue. A provision of the Energy Improvement and Extension Act of 2008 provides that the Fund's primary source of revenue, a per barrel tax, will expire on December 31, 2017. Therefore, even with substantial amounts reimbursed by BP, the Fund balance would likely decrease as a result of the expiration of its primary source of funding and the expectation of future Deepwater Horizon costs. This could raise the risk that the Fund would not be adequately equipped to deal with future spills, particularly one of this magnitude, and it will be important for Congress to determine a funding mechanism for the Fund going forward. The two other sources of revenue are cost recoveries from responsible parties and interest on the Fund principal from U.S. Treasury investments. As we reported in September 2007, the balance of the Fund generally declined from 1995 to 2006, mostly because the per barrel tax expired in December 1994 and revenue was not collected from January 1995 to March 2006.
• The potential need to fund the response to future spills poses risks. The possibility of needing to respond to another spill of national significance increases the risk to the Fund and the federal government. In fiscal year 2011 alone, the Fund has already paid for 267 oil spills through May 31, 2011. According to NPFC officials, on an annual basis, approximately 500 spills with varying costs and magnitude occur. In 2007, we reported that since 1990 approximately 51 spills amounting to over $1 million have occurred, and that responsible parties and the Fund have spent between $860 million and $1.1 billion for oil spill removal costs and compensation for damages. Responsible parties paid between 72 and 78 percent of these expenses, while the Fund paid the remainder (see the illustrative calculation below). As of May 31, 2011, the Fund's balance was approximately $2.0 billion. The federal government would need to consider using other sources of funds, particularly if another spill of national significance occurs and the responsible party(ies) are unable or unwilling to pay.
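As a rough, back-of-the-envelope illustration of what the historical cost split cited above implies for the Fund's share, the snippet below converts the reported ranges (total costs of $860 million to $1.1 billion, with responsible parties paying 72 to 78 percent) into an approximate dollar range for the Fund. This is simple arithmetic on the reported figures, not data drawn from the Fund's records.

```python
# Back-of-the-envelope arithmetic on the historical figures cited above:
# total removal and damage costs of $860 million to $1.1 billion since 1990,
# with responsible parties paying 72 to 78 percent and the Fund the remainder.

total_low, total_high = 860_000_000, 1_100_000_000
rp_share_low, rp_share_high = 0.72, 0.78

# The Fund's share is simply the remainder of the responsible parties' share.
fund_share_low = (1 - rp_share_high) * total_low    # smallest plausible Fund share
fund_share_high = (1 - rp_share_low) * total_high   # largest plausible Fund share

print(f"Approximate Fund share: ${fund_share_low/1e6:.0f} million "
      f"to ${fund_share_high/1e6:.0f} million")
# Roughly $189 million to $308 million over that period, which helps explain
# why a single spill of national significance can dwarf the Fund's historical outlays.
```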
The Coast Guard’s operating practices in these areas have changed to reflect the largely unprecedented size and evolving scope of the Deepwater Horizon incident. It has updated its cost reimbursement procedures to incorporate lessons learned from the initial response to this spill and although it has not yet updated its procedures for processing claims from spills of national significance to reflect lessons learned from its experiences processing Deepwater Horizon claims, it has plans to do so. We found that internal controls related to the documentation, review, and adjudication of individual and business claims submitted following the Deepwater Horizon oil spill were operating in accordance with established policies and procedures. During the period September 1, 2010, through May 31, 2011, NPFC received 901 Deepwater Horizon claims totaling $238 million. Of these claims, NPFC has finalized 570, all of which resulted in a denial or a withdrawal by the claimant. Our testing of a statistical sample of 60 out of the 432 Deepwater Horizon finalized claims through April 30, 2011 found that NPFC had followed its policies and procedures. Specifically, all claims  were submitted in writing, for a sum certain amount, and included the required claimant information (i.e., address, nature and extent of the impact of the incident, etc.); complied with OPA’s order of presentment (which requires that all claims for removal costs or damages must be presented first to the responsible party for payment), and verified that claimants had filed with the responsible party first before submitting their claim to NPFC; included evidence submitted by the claimant, or if needed, NPFC sent a letter to the claimant requesting additional support;  were adjudicated within the time provided by regulation;  underwent legal review and were submitted within the required time frame, if reconsideration was requested; and  when denied, were appropriately transmitted by sending a denial letter to the claimant along with a Claim Summary/Determination Form explaining the basis for denial. However, because all finalized claims resulted in denials or withdrawals, our testing could not assess the effectiveness of NPFC’s controls over payments to individuals and business claimants. Our statistical testing of 57 of 954 Deepwater Horizon cost reimbursements for government oil spill response activities from the Fund between April 20, 2010, and April 20, 2011, found that in all cases NPFC had followed established policies and procedures. Specifically, NPFC  accepted only cost reimbursement packages from government agencies with a signed PRFA or MIPR agreement in place for Deepwater Horizon response costs;  determined that the Federal On-Scene Coordinator certified that all services or goods were received;  ensured that supporting cost documentation submitted for reimbursement complied with the PRFA statement of work or MIPR agreement;  wrote a letter to FINCEN authorizing payment (signed by an NPFC Case Officer for the amount disbursed from the Fund under the appropriate PRFA or MIPR); and  obtained supporting documentation from the government agency requesting reimbursement. NPFC has strengthened its cost reimbursement guidance to reflect lessons learned from experiences during the initial Deepwater Horizon oil spill response, and officials told us they planned to take similar steps to update its claims processing guidance. 
Updating NPFC’s claims procedures to fully reflect Deepwater Horizon lessons learned will be critical should another spill of national significance occur. On April 14, 2011, NPFC issued an appendix for its cost reimbursement procedures manual modifying the procedures the agency is to follow for spills of national significance. This appendix is based on the lessons learned from addressing the unprecedented challenges posed by the Deepwater Horizon oil spill. It provides guidance, for example, targeting some of the issues that arose related to the management of finances, including cost documentation requirements for MIPRs with DOD. Specifically, the modified procedures provide that MIPRs will be reimbursed after the cost documentation is reviewed and work completion verified. NPFC officials told us that its current claims processing practices have also evolved since April 2010 to reflect lessons learned from the Deepwater Horizon oil spill. Over the past 10 years, NPFC typically received, on average, fewer than 300 claims each year. However, in light of the dramatic increase in the number of Deepwater Horizon oil spill claims received, NPFC refined its practices to augment its claims processing capacity. These practices included using contractors, Coast Guard reservists and, as needed, reassigning other NPFC staff. NPFC’s Standard Operating Procedures of the Claims Adjudication Division, which have not been updated since April 2004, do not yet include specific procedures required for processing claims for a spill of national significance. In particular, the procedures do not include modified practices to respond to the dramatic increase in claims filed as a result of the Deepwater Horizon incident. For the Deepwater Horizon oil spill, NPFC adopted practices involving newly developed performance indicators, past experience and continuous updates on current GCCF statistics as tools to identify the timing and extent of additional resources needed to augment its claims processing capabilities. GAO’s Standards of Internal Control in the Federal Government provide that internal control should provide for specific activities needed to help ensure management’s directives are carried out. NPFC has an opportunity to help ensure that expertise and effective practices are not lost by incorporating the lessons learned from the Deepwater Horizon incident in its guidance. Clearly documenting the policies and procedures used for the Deepwater Horizon incident would position NPFC for more effectively processing claims from any future spills of national significance by incorporating guidance, for example, on the use of performance indicators and statistics to address the size and timing of claim submissions. NPFC officials told us they are in the process of drafting an appendix for claims for spills of national significance for its individual and business claims procedures manual to document such procedures. The federal government has used a variety of approaches to oversee BP’s and GCCF’s cost reimbursement and claims processing including monitoring their activities. Soon after the Deepwater Horizon oil spill, the Deepwater Integrated Services Team (IST) was established at the direction of the National Incident Command, under the command of the U.S. Coast Guard, and initially was responsible for monitoring BP’s claims process. As Deepwater IST scaled back, its responsibilities were transitioned to relevant agencies. 
The oversight effort for cost reimbursement and claims activities transitioned to Justice, which continues to lead the efforts. In addition, DOI and NOAA are serving as the federal government's representatives for the natural resource trustees in evaluating the environmental impact of the Deepwater Horizon incident. In order to coordinate federal agencies' and departments' efforts to provide support services and initially monitor claims in response to the Deepwater Horizon oil spill, the IST was established, with the Federal Emergency Management Agency (FEMA) leading this effort. Figure 4 shows the IST participants. IST coordinated intergovernmental efforts to monitor the BP and GCCF claims processes to promote their efficiency and effectiveness by raising awareness and ensuring accountability and positive outcomes. It also helped raise awareness of concerns related to payment policy clarity for claimants, data access and reporting, and coordination of federal and state benefits and services to avoid duplicate payments. In conjunction with the stand-down of the National Incident Command on September 30, 2010, IST began scaling back its staffing and functions and concluded the final transition of its functions to federal agencies under the agencies' existing authorities and responsibilities effective February 1, 2011. For example, Justice continues to monitor the effectiveness and efficiency of the BP and GCCF claims processes and also leads coordination efforts to connect government stakeholders with BP and GCCF as needed. Justice has led federal agencies in using a range of approaches to establish practices to monitor the cost reimbursement and claims activities of BP and the GCCF. Justice encouraged BP to establish the Trust and the GCCF. Justice sent at least four letters to GCCF highlighting key concerns with the claims process. For example, in a letter dated February 4, 2011, Justice reiterated that OPA requires BP and other responsible parties to pay for damages resulting from the oil spill and called on GCCF to make its claims process more transparent so that claimants clearly understand the status of their claims. According to a Justice official, Justice's involvement stems from an interest in ensuring that the administration of the Trust is consistent with OPA and that claimants are treated fairly, as well as in helping to ensure transparency. On another related front, in order to identify non-OPA-compensable costs that the federal government incurred due to the duration, size, and location of the Deepwater Horizon oil spill, OMB issued guidance between July 2010 and January 2011 to federal agencies on identifying, documenting, and reporting costs associated with the spill. Specifically, OMB's guidance directed federal agencies to include in their summary cost reports federal employee time, travel, and other related costs that were not being reimbursed through the Fund. Justice has used the information submitted by the federal agencies to identify and seek reimbursement from responsible parties for certain non-OPA-compensable costs. According to Justice officials, Justice reviewed and analyzed the information submitted by the agencies through December 31, 2010, to determine which agency costs reflected agency activities directly related to the Deepwater Horizon oil spill.
After compiling this information, on May 13, 2011, Justice sent the responsible parties an invoice requesting reimbursement to the federal government of $81.6 million for the first two reporting quarters (through approximately December 2010) for other federal agency non-OPA-compensable costs. According to Justice officials, they will continue to analyze the Deepwater Horizon oil spill costs that federal agencies submit on a quarterly basis and plan to send additional requests for cost reimbursement to the responsible parties, as appropriate. Justice has also coordinated investigations of potentially fraudulent Deepwater Horizon claims from individuals and businesses under review by its National Center for Disaster Fraud. As of July 28, 2011, over 3,000 referrals had been submitted for investigation by BP, GCCF, and NPFC. The National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling recommended that Justice's Office of Dispute Resolution conduct an evaluation of GCCF once all claims have been paid, in order to inform claims processes in future spills of national significance. The Commission said the evaluation should include a review of the process, the guidelines used for compensation, and the success rate for avoiding lawsuits. NPFC has also participated in monitoring the individual and business claim activities of BP and GCCF in order to anticipate and prepare for any potential inflow of related claims to NPFC following any significant number of claim denials by BP or the other responsible parties. Claimants who are denied payment by the GCCF or whose claims are not settled within 90 days may pursue the following options: (1) appeal GCCF's decision under procedures established by the GCCF administrator, if the claim is in excess of $250,000; (2) begin litigation against the responsible parties in court; or (3) file a claim with NPFC. Over 900 Deepwater Horizon claims (some of which were denied by BP and GCCF) were filed with NPFC between September 2010 and May 2011. NPFC's claims adjudication division regularly obtains information from GCCF on GCCF claims paid and denied. This oversight information allows NPFC to determine the extent to which cases should be closed because the claimants were paid by GCCF, helps prevent claimants from being paid by both GCCF and NPFC for the same claim, and enables NPFC to better anticipate denied GCCF claims that could be resubmitted to NPFC for adjudication. The natural resource trustees for the Deepwater Horizon incident, which are responsible for evaluating the oil spill's impacts on natural resources, are DOI, NOAA, DOD, and the five Gulf Coast states (Texas, Louisiana, Mississippi, Alabama, and Florida). On September 27, 2010, NOAA sent the eight responsible parties identified by DOI a Notice of Intent to Conduct Restoration Planning for the Deepwater Horizon incident on behalf of federal and state trustees. On April 21, 2011, the federal and state trustees announced that BP had agreed to provide $1 billion from the Trust for early restoration projects in the Gulf of Mexico to address natural resource damage caused by the Deepwater Horizon oil spill. Under the agreement, the $1 billion will be provided to fund projects such as the rebuilding of coastal marshes, replenishment of damaged beaches, conservation of sensitive areas for ocean habitat for injured wildlife, and restoration of barrier islands and wetlands that provide natural protection from storms.
The $1 billion in early restoration projects will be selected and implemented as follows: (1) DOI will select and implement $100 million in projects; (2) NOAA will select and implement $100 million in projects; (3) each of the five states (Alabama, Florida, Louisiana, Mississippi, and Texas) will select and implement $100 million in projects; and (4) DOI and NOAA will select projects submitted by the state trustees for $300 million. Several factors contribute to financial risks that the federal government will continue to face for a number of years as a result of the Deepwater Horizon oil spill. Future uncertainties include the total expenses of fully addressing the impact of the Deepwater Horizon oil spill and the responsible parties' and guarantors' willingness and ability to continue to pay, possibly for the next several decades. Uncertainty over federal financial risks also arises from the expiration in 2017 of the per barrel oil tax, the primary revenue source for the Fund, and from the need for funding in response to other potential significant spills. Given these risks, it will be important for Congress to consider whether additional legislative action would help ensure that OPA's $1 billion per-incident cap does not hinder NPFC's ability to reimburse federal agencies' costs, pay natural resource damages, and pay valid claims submitted by individuals and businesses. To this end, we are reiterating the Matter for Congressional Consideration in our November 2010 report that Congress should consider amending OPA or enacting new legislation to take into account reimbursements from responsible parties in calculating an incident's expenditures against the Fund's $1 billion per-incident expenditure cap. For its part, NPFC has an opportunity to document and incorporate the lessons learned from its Deepwater Horizon oil spill experience in its policies and procedures to help improve its management of any future spills of national significance. Capturing lessons learned about processing such claims will be essential should a significant spill occur in the future. In addition, NPFC took action to address recommendations made in our November 2010 report to establish and maintain effective cost reimbursement policies and procedures and to ensure responsible parties are properly notified (see app. I for the recommendations and their current status). Congress should consider the options for funding the Oil Spill Liability Trust Fund, as well as the optimal level of funding to be maintained in the Fund, in light of the expiration of the Fund's per barrel tax funding source in 2017. In order to provide guidance for responding to a spill of national significance and build on lessons learned, we recommend that the Secretary of Homeland Security direct the Director of the Coast Guard's NPFC to finalize the revisions the Coast Guard is drafting to its Claims Adjudication Division's Standard Operating Procedures to include specific required steps for processing claims received in the event of a spill of national significance. We provided copies of the draft report to the Departments of Homeland Security, Justice, the Interior, Defense, and Commerce; the Office of Management and Budget; and the Environmental Protection Agency for comment prior to finalizing the report. In its written comments, reproduced in appendix VIII, the Department of Homeland Security concurred with our recommendation and stated it plans to finalize changes to operating procedures by October 31, 2011.
The Departments of Homeland Security, Justice, and the Interior and the Environmental Protection Agency also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Homeland Security; Director of NPFC; Attorney General of the United States; Secretary of the Interior; Secretary of Defense; Secretary of Commerce; Director of the Office of Management and Budget; Administrator of the Environmental Protection Agency; and to other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact Susan Ragland at (202) 512-8486 or raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. The National Pollution Funds Center (NPFC) took actions as of September 2011 to address the four recommendations we made in our November 2010 report. Shortly after the explosion and subsequent sinking of BP's leased Deepwater Horizon oil rig in the Gulf of Mexico in April 2010, we were requested to (1) identify the financial risks to the federal government and, more specifically, to the Oil Spill Liability Trust Fund (Fund) resulting from oil spills, particularly Deepwater Horizon; (2) assess NPFC's internal controls for ensuring that processes and payments for cost reimbursements and claims related to the Deepwater Horizon oil spill were appropriate; and (3) describe the extent to which the federal government oversees the BP and Gulf Coast Claims Facility (GCCF) Deepwater Horizon oil spill-related claims processes. This report is the third and final in a series of reports on the Deepwater Horizon oil spill issued in response to that request. Concerning our analysis of the financial risks and exposures to the federal government and the Fund, we identified and analyzed applicable laws and regulations in order to determine statutory and regulatory limitations on the liability of responsible parties that may pose financial risks to the Fund and the federal government. We also considered GAO reports on the use of the Fund, reviewed publicly available quarterly financial information of responsible parties through June 2011 to gain an understanding of the extent to which contingent liabilities are reported by these companies, and reviewed reports issued by the Congressional Research Service on responsible party liabilities under OPA. To determine the obligations and costs incurred in relation to the Fund's $1 billion per incident cap, we obtained and analyzed the daily financial summary data NPFC used related to the Deepwater Horizon oil spill. We also reviewed NPFC's daily financial summary data to compare the amounts federal and state agencies had submitted for reimbursement from the Fund to the amounts NPFC had authorized for payment from the Fund to these government agencies through May 2011. We obtained invoices NPFC sent to the responsible parties to reimburse the Fund, analyzed the requests for reimbursements submitted by federal and state agencies, and compared the invoiced amounts to the amounts federal and state agencies had submitted for payment from the Fund.
To assess NPFC’s internal controls for ensuring that agencies’ requests for cost reimbursements and claims from individuals and businesses are appropriate, we reviewed relevant sections of OPA and compared the sections to NPFC’s cost reimbursement and claims Standard Operating Procedures and to GAO’s Standards for Internal Control in the Federal Government. We interviewed cognizant NPFC officials about its cost reimbursement and claims processes, Deepwater Horizon oil spill response efforts, specific cost recovery actions under way or completed, and the NPFC division(s) responsible for those actions. We also conducted walkthroughs of the cost reimbursement and claims processes, observed NPFC’s process for generating an invoice to the responsible parties for Deepwater Horizon response costs, and conducted a site visit to the Gulf area in October 2010. For agency cost reimbursements, we tested a statistical sample of payments to federal and state agencies for their Deepwater Horizon removal and response activities paid from the Fund between April 2010 and April 2011. We interviewed NPFC’s Case Management Officer for Deepwater Horizon and other NPFC officials to gain a thorough understanding of NPFC’s cost reimbursement process. In addition, we performed walk-throughs of NPFC’s cost reimbursement and billing processes and reviewed NPFC’s Case Management’s standard operating procedures and other guidance documents. We also obtained updated information from NPFC officials about the status of the response to recommendations made in our November 2010 report. To determine our population for sampling cost reimbursements for the Deepwater Horizon oil spill, we obtained a disbursement file from U.S. Coast Guard’s Finance Center (FINCEN) which consisted of 173,458 disbursements from the Fund between April 2010 and April 2011. We reviewed the information in the file to determine whether we could rely on the data in order to select a sample and test internal controls associated with the cost reimbursement process. We assessed the reliability of the data in the file and determined it could be used to select a statistical sample for testing. From the population of 173,458 disbursements from the Fund between April 2010 and April 2011, we identified 954 disbursements for Deepwater Horizon. We then selected a random statistical sample of 57 disbursements for testing. We tested the 57 Fund disbursements for adherence to NPFC’s case management standard operating procedures. Our test included reviewing the request for reimbursement submission to  determine if a signed Pollution Removal Funding Authorization (PRFA) or Military Interdepartmental Purchase Request (MIPR) was in place between the performing federal or state agency and the Federal On-Scene Coordinator;  assess that the services or goods provided were in accordance with the terms of the PRFA statement of work or MIPR agreement; confirm evidence of supporting documentation; confirm the Federal On-Scene Coordinator’s approval of the amount requested for reimbursement by the performing federal or state agency; and confirm an NPFC Case Manager signed an Authorization to Pay or Authority to Allow Intra-Governmental Payment and Collection memorandum addressed to FINCEN authorizing payment from the Fund. For claims, we tested a statistical sample of finalized Deepwater Horizon claims presented to the Fund between September 2010 and April 2011. 
First, we interviewed NPFC’s Claims Division Chief, Senior Claims Manager, and other cognizant NPFC officials to gain an understanding of NPFC’s claims adjudication process. On the basis of information provided by NPFC, we identified 432 finalized claims from NPFC’s Claims Processing System submitted for the Deepwater Horizon spill between September 2010 and April 2011. From the population of 432 finalized claims, we selected a random sample of 60 claims to test. We tested the sample for adherence to OPA’s and NPFC’s claims policies and procedures. We tested NPFC’s adherence to its procedures for claim receipt, initial review, adjudication review, determination, and reconsideration. In conducting our work, we reviewed documents from individual claim files, and also used NPFC’s Claims Processing System to review the responsible party’s communication on the claims presented to the NPFC for payment. We tested to ensure that NPFC had a process for complying with OPA’s prioritization requirement that all claims be presented to the responsible party before they can be presented to the Fund. We tested to confirm that the claims were signed and submitted in writing, for a sum certain amount, and were processed by NPFC within the required statutory time frame. Because there were no payments made for claims submitted for Deepwater Horizon for our scope period, we were unable to test the payment process. Because we selected a sample of claims and cost disbursements, our results are estimates of the population and thus are subject to sample errors that are associated with samples of this size and type. Our confidence in the precision of the results from these samples is expressed in 95-percent confidence intervals. A 95-percent confidence interval is the interval that would contain the true population value in 95 percent of samples of this type and size. The results of our tests on both the sample of claims and the sample cost disbursements did not find any exceptions. On the basis of these results, we estimated that the 95-percent confidence intervals range from zero to 5 percent for both sample results and concluded with 95-percent confidence that the error rate in each population does not exceed 5 percent. We reviewed NPFC’s policies and procedures for processing and adjudicating oil spill claims and obtained information on NPFC’s claims contingency planning for handling potential surges in claims submitted related to the Deepwater Horizon oil spill. We obtained claims information from the GCCF and NPFC through May 2011 to describe the number and types of claims filed by individuals and businesses against the GCCF and the Fund, and the number and dollar amounts submitted, reviewed, and paid. We also obtained the Notices of Designation NPFC sent to responsible parties and interviewed NPFC officials about their methodology for identifying responsible parties and their procedures for notifying them. We interviewed officials at the Departments of Commerce, Defense, Interior, and Homeland Security, and the Environmental Protection Agency to obtain an understanding of these agencies’ response activities for the Deepwater Horizon oil spill and its process for billing on costs incurred. We also obtained invoices NPFC sent to the responsible parties and analyzed these billed amounts and summarized the amounts by federal and state agencies. 
We compared the amounts submitted for reimbursement from the Fund by the performing federal and state agencies to the amounts billed to the responsible parties on their behalf to identify which agencies have begun their cost recovery efforts. We also compared the amounts requested for reimbursement from the Fund by the performing federal and state agencies to the amounts reimbursed from the Fund to determine the status of the agencies' cost recovery efforts. To describe how the federal government oversees the BP and GCCF claims processes, we interviewed Department of Justice (Justice) officials about their oversight of BP's claims process, the establishment of BP's $20 billion Trust, and the setup of the GCCF. We reviewed Justice's comments on the draft GCCF Emergency Advanced Payment and GCCF Final Payment protocols, and we obtained and reviewed the Trust agreement. We obtained and reviewed letters sent by Justice to the responsible parties discussing their financial responsibilities in connection with the Deepwater Horizon oil spill, which requested that the responsible parties provide advance notice of any significant corporate actions related to organization, structure, and financial position. We obtained and reviewed letters sent by Justice to the GCCF highlighting concerns about its pace for processing claims, need for transparency, and compliance with OPA standards. In addition, we interviewed Deepwater Integrated Services Team (IST) officials about their coordination activities regarding the BP and GCCF claims process and social services coordination efforts. The IST, which was established in June 2010 and stood down in September 2010, took steps to raise awareness of concerns related to claim payment policy clarity, data access and reporting of overall claims information, and the coordination of federal and state benefits and services to avoid duplicate payments. We reviewed documentation from the Deepwater IST, including its coordination plan, team updates, and transition plan. We did not evaluate the effectiveness of the monitoring and oversight efforts by Justice and the Deepwater IST. Furthermore, we reviewed publicly available claim reports from BP and GCCF for claim amounts paid, but we did not test the claims data or amounts reported by BP or GCCF. We also interviewed Office of Management and Budget and Justice officials about their role and planned actions in collecting and reviewing agency quarterly cost submissions to bill the responsible parties on behalf of the federal government. We conducted this performance audit from July 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. OPA provides for the payment of claims for uncompensated removal costs and certain damages caused by the discharge, or substantial threat of discharge, of oil into or upon the navigable waters of the U.S., its adjoining shorelines, or the Exclusive Economic Zone of the U.S. Claims for certain uncompensated removal costs and damages are adjudicated by NPFC and paid out of the Principal Fund of the Fund. Order of Presentment and Time Limitation for Submitting Claims to NPFC.
Claims for removal costs or damages may be presented first to the Fund only in the following situations: when NPFC has advertised or otherwise notified claimants in writing; when the claim is presented by a responsible party who may assert a claim; when the claim is presented by a governor of a state for removal costs incurred by the state; and when the claim is presented by a U.S. claimant in a case where a foreign offshore unit has discharged oil causing damage for which the Fund is liable. In all other cases where the source of the discharge can be identified, the claimant must first present the OPA claim to the responsible party for payment. If the responsible party denies the claim, the claimant may submit the claim to NPFC for adjudication. Even if the responsible party takes no specific action to deny the claim, if it is unable or unwilling to pay the claim within 90 days of the claimant's submission, the claimant may then submit the claim to NPFC for adjudication. If the responsible party denies a claim that is subsequently processed and payment is made from the Fund, NPFC will seek to recover these costs from the responsible party. Damage claims must be made within 3 years of when the damage and its connection to the spill were reasonably discoverable with the exercise of due care. Claims for removal costs must be made within 6 years after the date of completion of all removal actions for the incident. Designation of the Source of the Incident, Responsible Party Notification, and Advertisement. The process of designating the source of an oil discharge and notifying the responsible party frequently advances concurrently with the Federal On-Scene Coordinator's attempt to identify the responsible party during the initial stages of spill response. In addition to the Federal On-Scene Coordinator issuing a letter of Federal Interest, the Federal On-Scene Coordinator and NPFC's Case Management and Claims Divisions may decide that the potential for claims exists. Once they so decide, the Claim Manager is normally responsible for executing the Notice of Designation. Designation of a responsible party may also occur immediately following an on-site visit or more incrementally as information on the identity of the responsible party becomes available. Claimant Requirements. While NPFC has a form that claimants may use to submit their claims, there is no required format for submitting a claim to NPFC. However, OPA, through its implementing regulations, requires that the claim be (1) submitted in writing, (2) for a sum certain amount of compensation for each category of uncompensated damages or removal costs, and (3) signed by the claimant. The claimant bears the burden of providing all evidence, information, and documentation deemed necessary by NPFC to support the claim. If the claimant receives any compensation for the claimed amounts while the claim is pending against the Fund, the claimant is required to immediately amend the claim submitted to NPFC. Among other duties, the U.S. Coast Guard's NPFC administers the Fund by disbursing funds to federal, state, local, or tribal agencies for their removal activities under the Oil Pollution Act of 1990, as amended (OPA). When an oil spill occurs, relevant federal agencies, including the U.S. Coast Guard and the Environmental Protection Agency (EPA), are notified by the National Response Center. The Coast Guard has responsibility for, and serves as the Federal On-Scene Coordinator for, spills occurring in the coastal zones, while EPA has responsibility for spills that occur on land.
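The presentment and time-limit rules summarized above can be expressed as a simple eligibility check. The sketch below is an illustrative model only, not NPFC's adjudication logic; the field names and the simplified labels for the exceptions to the order of presentment are assumptions.

```python
from datetime import date
from typing import Optional

# Simplified labels for the situations in which a claim may come first to the Fund
# (see the discussion above for the full conditions).
PRESENTMENT_EXCEPTIONS = {
    "npfc_advertised",
    "responsible_party_claimant",
    "state_governor_removal_costs",
    "us_claimant_foreign_offshore_unit",
}

def may_present_to_fund(exception: Optional[str],
                        presented_to_rp_on: Optional[date],
                        rp_denied: bool,
                        today: date) -> bool:
    """A claim may come to NPFC first only under a listed exception; otherwise it
    must have been presented to the responsible party (RP) and either denied or
    left unpaid for at least 90 days."""
    if exception in PRESENTMENT_EXCEPTIONS:
        return True
    if presented_to_rp_on is None:
        return False
    return rp_denied or (today - presented_to_rp_on).days >= 90

def within_time_limit(claim_type: str, trigger_date: date, submitted: date) -> bool:
    """Damage claims: within 3 years of when the damage and its connection to the
    spill were reasonably discoverable. Removal cost claims: within 6 years of
    completion of all removal actions. Year arithmetic is simplified to 365 days."""
    years = 3 if claim_type == "damages" else 6
    return (submitted - trigger_date).days <= years * 365

# Example: a claim denied by the responsible party may then be submitted to NPFC.
print(may_present_to_fund(None, date(2010, 9, 1), rp_denied=True, today=date(2011, 1, 15)))
```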
NPFC’s Case Management Division is responsible for providing access to the Emergency Fund when a spill occurs and for working with the Federal On-Scene Coordinator and agencies to ensure accurate cost documentation to support cost recovery. NPFC’s Case Management Division operates through a matrix organization comprised of four regional case teams. Each regional case team consists of a manager and multiple case officers. When a spill occurs, NPFC assigns responsibility to the regional case team representing the geographic area in which the spill occurs. NPFC uses a three-level system to help determine the complexity of an oil spill case and its required documentation for cost reimbursement. Level I (Routine) represents about 85 percent of all oil spill incidents, in which total removal costs to the government are not expected to exceed $50,000, removal activities are localized, and removal activities can be completed within 2 weeks. For a Level I incident, agencies submit documentation to the Federal On-Scene Coordinator at the completion of removal activities. Level II (Moderately Complex) represents about 10 to 15 percent of all oil spill incidents, in which total removal costs to the government are not expected to exceed $200,000. Level II removal activities take place in multiple locations, require the involvement of several external resources (i.e., state agencies and other government units), and removal activities take longer than 2 weeks to complete. Level III (Significantly Complex) represents less than 5 percent of all oil spill incidents with total removal costs greater than $200,000. Level III removal activities take place in multiple locations, require the involvement of numerous contractors, and similar to Level II, the assistance of several external resources is needed. For both Level II and III incidents, documentation is submitted to the Federal On-Scene Coordinator as often as practical (daily if possible) until final removal activities are completed. Because the Federal On-Scene Coordinator is considered the best judge of factors regarding the oil spill, he or she is expected to select the level of documentation appropriate for the situation. The Federal On-Scene Coordinator is responsible for issuing PRFAs or MIPRs to obtain removal and logistical services from other government agencies. The PRFA commits the Fund to payment, by reimbursement, of costs incurred for agreed-upon pollution response activities undertaken by a federal agency assisting the Federal On-Scene Coordinator. The terms of a PRFA may include (1) salary costs, (2) travel and per diem expenses, (3) charges for the utilization of agency-owned equipment or facilities, and (4) expenses for contractor- or vendor-supplied goods or services obtained by the agency for removal assistance. Similarly, the Federal On- Scene Coordinator may issue a MIPR for agreed-upon activities of the DOD or its related components. In contrast to PRFAs, MIPRs (used primarily by DOD and its components) commit the Fund to reimburse costs based on valid obligations incurred for oil spill response activities prior to being incurred. For the Deepwater Horizon oil spill, NPFC’s cost reimbursement documentation requirements are the same for both MIPRs and PRFAs. Differences between PRFAs and MIPRs include that PRFAs are a reimbursement agreement and require the agency to submit documentation demonstrating services and have the Federal On-Scene Coordinator certify completion of work, prior to NPFC disbursing funds to the agency. 
For incidents other than Deepwater Horizon, MIPRs allow DOD to receive the funds from NPFC prior to submitting documentation or obtaining certification of completion of work. The following are the six major steps in NPFC's cost reimbursement process for federal, state, and local government agencies requesting payment from the Fund. 1. The Federal On-Scene Coordinator issues a PRFA or MIPR to the government agency. 2. The government agency performs oil spill removal and response activities and submits a reimbursement request to the Federal On-Scene Coordinator. 3. The Federal On-Scene Coordinator reviews and certifies that services were provided by the government agency. 4. The Federal On-Scene Coordinator forwards the agency's reimbursement request to NPFC for review and approval. 5. NPFC reviews the agency's reimbursement documentation and sends an Authorization-to-Pay memorandum to FINCEN approving payment from the Fund. 6. FINCEN reimburses the government agency for its oil spill removal costs. Removal costs compensable from the Fund include the following: costs for the containment and removal of oil from water and shorelines, including contract services (such as cleanup contractors and incident management support) and the equipment used for removal; costs for the proper disposal of recovered oil and oily debris; costs for government personnel and temporary government employees hired for the duration of the spill response, including costs for monitoring the activities of the responsible parties; and costs for the prevention or minimization of a substantial threat of an oil spill. Compensable damages include the following: federal, state, foreign, or Indian tribe trustees can claim damages for injury to, or destruction of, and loss of, or loss of use of, natural resources, including the reasonable costs of assessing the damage; damages may be claimed for injury to, or economic losses resulting from destruction of, real or personal property; damages may be claimed for loss of subsistence use of natural resources, without regard to the ownership or management of the resources; the federal, state, or local government can claim damages for the loss of taxes, royalties, rents, fees, or profits; companies can claim damages for loss of profits or impairment of earning capacity; and states and local governments can recover costs for providing increased public services during or after an oil spill response, including protection from fire, safety, or health hazards. DHS, EPA, and the Department of Commerce inspectors general (IG) performed or are performing work related to their agencies' costs to respond to the Deepwater Horizon oil spill. The DHS IG is performing an audit to determine whether the Coast Guard has adequate policies, procedures, and controls in place to capture all direct and indirect costs associated with the Deepwater Horizon oil spill. The EPA IG is conducting work to determine if EPA has adequate controls in place to recover its Gulf Coast oil spill response costs. The Department of Commerce IG has published a review of the National Oceanic and Atmospheric Administration's (NOAA) tracking of oil spill costs. In December 2010, the Department of Commerce IG found that while NOAA had developed processes to track the costs associated with its Deepwater Horizon oil spill activities, improvements are needed to ensure that all costs charged to oil spill projects, whether funded by appropriations or reimbursements, are properly recorded in the financial system and supported by sufficient, appropriate documentation.
NOAA’s official comments emphasized the unprecedented mobilization as a result of the scope of the Deepwater Horizon oil spill, and stated that as NOAA’s participation has become more routine, its documentation of the oil spill activities has become more consistent. In addition, as NOAA evaluates its own execution of the response process, NOAA stated it will examine the observations provided by the IG. To determine the extent to which government agencies have been reimbursed from the Fund for their Deepwater Horizon response efforts, we obtained and analyzed reimbursement information from NPFC from April 2010 through May 2011. We found that the total maximum amount authorized through intergovernmental agency agreements for federal agencies’ and states’ Deepwater Horizon oil spill response costs is over $477.7 million. However, only seven federal agencies have submitted and received payment from the Fund totaling $189.4 million for their response costs; and six federal agencies that have an agreement in place authorizing them to perform work and receive reimbursement from the Fund for their response efforts, have either not yet submitted a request for reimbursement or have not provided sufficient supporting documentation for their request. (See table 4.) In addition to the contact named above, Kim McGatlin (Assistant Director); F. Abe Dymond (Assistant Director); James Ratzenberger (Assistant Director); Hannah Laufe (Assistant General Counsel); Katherine Lenane (Assistant General Counsel); Jacquelyn Hamilton (Acting Assistant General Counsel); Jehan Abdel-Gawad; James Ashley; Mark Cheung; Patrick Frey; Wilfred Holloway; Donald Holzinger; David Hooper; Mark Kaufman; Jason Kelly; Matthew Latour; Chari Nash- Cannaday; Donell Ries; and Doris Yanger made significant contributions to this report. Deepwater Horizon Oil Spill: Update on Federal Financial Risks and Claims Processing. GAO-11-397R. Washington D.C.: April 18, 2011. Deepwater Horizon Oil Spill: Preliminary Assessment of Federal Financial Risks and Cost Reimbursement and Notification Policies and Procedures. GAO-11-90R. Washington D.C.: November 12, 2010. Oil Spills: Cost of Major Spills May Impact Viability of Oil Spill Liability Trust Fund. GAO-10-795T. Washington D.C.: June 16, 2010. Maritime Transportation: Major Oil Spills Occur Infrequently, but Risks Remain. GAO-08-357T. Washington D.C.: December 18, 2007. Maritime Transportation: Major Oil Spills Occur Infrequently, but Risks to the Federal Oil Spill Fund Remain. GAO-07-1085. Washington D.C.: September 7, 2007. U.S. Coast Guard National Pollution Funds Center: Improvements Are Needed in Internal Control Over Disbursements. GAO-04-340R. Washington D.C.: January 13, 2004. U.S. Coast Guard National Pollution Funds Center: Claims Payment Process Was Functioning Effectively, but Additional Controls Are Needed to Reduce the Risk of Improper Payments. GAO-04-114R. Washington D.C.: October 3, 2003.
On April 20, 2010, an explosion of the Deepwater Horizon oil rig leased by BP America Production Company (BP) resulted in a significant oil spill. GAO was requested to (1) identify the financial risks to the federal government resulting from oil spills, particularly Deepwater Horizon, (2) assess the Coast Guard's internal controls for ensuring that processes and payments for spill-related cost reimbursements and claims related to the spill are appropriate, and (3) describe the extent to which the federal government oversees the BP and Gulf Coast Claims Facility cost reimbursement and claims processes. We issued status reports in November 2010 and April 2011. This is the third and final report related to these objectives. We obtained and analyzed data on costs incurred from April 2010 through May 2011 and claims submitted and processed from September 2010 through May 2011. We reviewed relevant policies and procedures, interviewed officials and staff at key federal departments and agencies, and tested a sample of claims processed and cost reimbursements paid for compliance with internal controls. Both the individual circumstances of the Deepwater Horizon incident and the overall framework for how the federal government responds to oil spills present a mix of evolving, but as yet uncertain, financial risks to the federal government and its Oil Spill Liability Trust Fund (Fund). The extent of financial risks to the federal government from the Deepwater Horizon incident is closely tied to BP and the other responsible parties. BP established a $20 billion Trust to pay for individual and business claims and other expenses. As of May 31, 2011, BP had paid over $700 million of federal and state government costs for oil spill cleanup. Federal agency cleanup and restoration activities are under way, and agencies continue to incur costs and submit them for reimbursement. However, the full extent of these costs, particularly those related to environmental cleanup, may not be fully realized for some time. As cleanup costs continue to mount, it is possible that expenditures from the Fund will reach the $1 billion total expenditure per incident cap; expenditures were over $626 million as of May 31, 2011. If these amounts reach the total expenditure cap of $1 billion, the Fund can no longer be used to make payments to reimburse agencies' costs (or to pay valid individual or business claims if not paid by the responsible parties). At that point, government agencies would no longer be able to obtain reimbursement for their costs. In November 2010, GAO suggested that Congress may want to consider setting a Fund per incident cap based on net expenditures (expenditures less reimbursement), rather than total expenditures. Finally, GAO found the federal government's longer-term ability to provide financial support in response to future oil spills is also at risk because the Fund's primary source of revenue, a tax on petroleum products, is scheduled to expire in 2017. GAO's testing of the Coast Guard's internal controls over Deepwater Horizon claims and cost reimbursements showed that the adjudicated claims processed and the costs reimbursed were appropriate and properly documented. In November 2010, GAO made four recommendations regarding establishing and maintaining effective cost reimbursement policies and procedures for the Fund.
The Coast Guard changed its operating practices to reflect lessons learned from the initial response to the Deepwater Horizon incident, and it has updated its cost reimbursement procedures accordingly. However, the Coast Guard has not yet updated its procedures for processing claims from spills of national significance, so lessons learned from its experiences processing Deepwater Horizon claims could be lost. The federal government has used a variety of approaches to oversee BP's and GCCF's cost reimbursement and claims processing. Soon after the Deepwater Horizon oil spill, the federal government established a Deepwater Integrated Services Team (IST), which was initially responsible for monitoring BP's claims process, among other things. Subsequently, the oversight of cost reimbursement and claims activities transitioned to the Department of Justice, which continues to lead this and other efforts. In addition, the Department of the Interior and the National Oceanic and Atmospheric Administration are serving as the federal government's representatives for the natural resource trustees in evaluating the environmental impact of the Deepwater Horizon spill and selecting and implementing restoration projects to be funded by BP. GAO is (1) reiterating that Congress may want to consider setting a Fund cap per incident based upon net expenditures, (2) presenting a new matter concerning extending the barrel tax used to finance federal oil spill responses to sustain program funding, and (3) making a recommendation to improve procedures for future significant spills. In responding, the Department of Homeland Security concurred with the recommendation.
US-VISIT’s goals are to (1) enhance the security of U.S. citizens and visitors, (2) facilitate legitimate travel and trade, (3) ensure the integrity of the U.S. immigration system, and (4) protect the privacy of visitors. The program is to achieve these goals by collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying visitor identity, and determining visitor admissibility through the use of biometrics (digital fingerprints and a digital photograph); and facilitating information sharing and coordination within the immigration and border management community. A series of statutes that date back more than a decade have provided a framework for developing and deploying US-VISIT entry and exit capabilities. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 (IIRIRA) required the Attorney General to develop an automated system to record the departure of every foreign national from the United States and then match it to the individual’s arrival record. Subsequently, section 2(a) of the Immigration and Naturalization Service Data Management Improvement Act (DMIA) of 2000 amended the original entry-exit provisions of IIRIRA and required the Attorney General to implement an integrated entry and exit data system for foreign nationals. More specifically, DMIA required an electronic system that would provide access to and integrate foreign national arrival and departure data that are authorized or required to be created or collected under law and are in an electronic format in Department of Justice or Department of State databases, such as those used at POEs and consular offices. The system, as described in DMIA, is to compare available arrival records with available departure records, allow online search procedures to identify foreign nationals who may have overstayed their authorized period of admission, and use available data to produce a report of arriving and departing foreign nationals. DMIA also required the implementation of the system at airports and seaports by December 31, 2003, at the 50 highest- volume land POEs by December 31, 2004, and at all remaining POEs by December 31, 2005. Subsequent laws added specific biometric requirements. The USA PATRIOT Act of 2001, as amended, required the development and certification of a technology standard by January 26, 2003, including appropriate biometric identifiers that can be used to verify the identity of persons applying for a U.S. visa or seeking to enter the United States pursuant to a visa, for the purposes of conducting background checks, confirming identity, and ensuring that a person has not received a visa under a different name. The act also required DHS and the Department of State to focus on the utilization of biometric technology and the development of tamper-resistant documents readable at POEs for the integrated entry and exit data system. The Visa Waiver Permanent Program Act required DHS to develop and implement a fully automated system to control entry and exit of aliens at airports and seaports who enter the United States under the Visa Waiver Program. 
The act was subsequently amended to require, not later than August 3, 2008, an exit system that uses biometric information and records every alien participating in the Visa Waiver Program who departs the United States by air. The Intelligence Reform and Terrorism Prevention Act of 2004 requires the collection of biometric exit data for all categories of individuals required to provide biometric entry data under US-VISIT, regardless of the POE where they entered the United States. The law also required DHS to develop a plan to accelerate the full implementation of the program. The Implementing Recommendations of the 9/11 Commission Act of 2007 further addressed the Visa Waiver Program by restricting DHS's authority to admit additional countries into the Visa Waiver Program until the department, among other things, was able to certify that it could verify the departure of not less than 97 percent of foreign nationals who exit from U.S. airports and had incorporated biometric indicators (such as fingerprints) into the air exit system by June 30, 2009. US-VISIT supports a series of homeland security-related mission processes that cover hundreds of millions of foreign national travelers who enter and leave the United States at about 300 air, sea, and land POEs. These five processes are described below and depicted in figure 1. Pre-entry: the process of evaluating a traveler's eligibility for required travel documents, enrolling travelers in automated inspection programs, and prescreening travelers entering the United States. Entry: the process of determining a traveler's admissibility into the United States at air, sea, or land POEs. Status management: the process of managing and monitoring the changes and extensions of the visits of lawfully admitted nonimmigrant foreign nationals to ensure that they adhere to the terms of their admission and that they notify appropriate government entities when they do not. Exit: the process of collecting information on travelers departing the United States. Analysis: the process of continuously screening individuals enrolled in US-VISIT against watch lists for appropriate reporting and action. To support these processes, US-VISIT systems and equipment must exchange data with a variety of other systems, some of which are owned by other agencies. For example, US-VISIT's Automated Biometric Identification System (IDENT) collects and stores biometric data about foreign visitors, including information from the Federal Bureau of Investigation (FBI), U.S. Immigration and Customs Enforcement information on deported felons and sexual offender registrants, and DHS information on previous criminal histories and previous IDENT enrollments. IDENT connects to a number of different systems, some of which are described here. Arrival and Departure Information System is owned by US-VISIT and stores noncitizen traveler arrival and departure biographic data received from air and sea carrier manifests. It matches entry, immigration status updates, and departure data to provide immigration status, including whether the individual has overstayed his or her authorized period of stay. Consular Consolidated Database is owned by the Department of State and includes information on visa applicants.
TECS, formerly known as the Treasury Enforcement Communications System, is owned by CBP and maintains lookout (i.e., watch list) data, interfaces with other agencies’ databases, and is currently used by CBP officers at POEs to verify traveler information and update traveler data. U.S. Coast Guard’s Mona Pass Proof-of-Concept is determining the feasibility of deploying a mobile biometrics identification capability on Coast Guard cutters in the Mona Passage and in the Coast Guard’s South Florida patrol area. Integrated Automated Fingerprint Identification System is owned by FBI and is the bureau’s automated 10-fingerprint matching system and is electronically connected to all 50 states, as well as some federal agencies. The US-VISIT program has roots in a program known as Entry Exit, which was established by the former Immigration and Naturalization Service in 2002 in response to IIRIRA and other relevant legislation. Following the merger of the functions of the Immigration and Naturalization Service into DHS in 2003, the program was placed in DHS’s Border and Transportation Security Directorate and renamed US-VISIT. In 2007, US-VISIT was moved to DHS’s National Protection and Programs Directorate. DHS has delivered US-VISIT entry, and evaluated exit, capabilities in a series of increments. As a result, a biometrically enabled entry capability has been fully operational at about 300 air, sea, and land POEs since December 2006 (115 airports, 14 seaports, and 154 of 170 land ports), but an exit capability has yet to be fully deployed. Increment 1 (air and sea entry), Increment 2B (land entry), and Increment 3 (land entry) addressed the deployment of an entry capability, while Increment 1B (air and sea exit) and Increment 2C (land exit) evaluated different alternatives for collecting exit information. The timing and purpose of each increment, as well as the delivery of other significant US-VISIT capabilities, are depicted in figure 2 and described after the figure. Increments 1, 2B, and 3, which largely involved building interfaces among existing systems and enhancing the systems’ capabilities and supporting infrastructure, were delivered sequentially from January 2004 to December 2006. Specifically, in January 2004, the program office began operating most aspects of its planned biometric entry capability at 115 airports and 14 seaports for certain foreign nationals, including those from visa waiver countries (Increment 1). This capability was expanded to the 50 busiest land POEs by December 2004 (Increment 2B) and essentially deployed to 104 remaining land POEs by December 2005 (Increment 3). As of December 2006, the program office was operating this entry capability at 154 of 170 land POEs. According to DHS, US-VISIT entry operations have produced mission value. For example, as of June 2009, the program reported that it had more than 150,000 biometric hits in entry resulting in more than 8,000 people having adverse actions, such as denial of entry, taken against them. Further, about 43,000 leads were referred to the U.S. Immigration and Customs Enforcement immigration enforcement unit, resulting in 1,691 arrests. Although difficult to demonstrate, officials have also cited the possible deterrence of terrorist entry due to the program’s publicized capability to verify visitor identity at U.S. borders during entry and to match visitors against watch lists of known and suspected terrorists. 
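To illustrate the entry-exit matching function of the Arrival and Departure Information System described above, the sketch below pairs arrival and departure records by traveler and flags apparent overstays. It is a simplified, hypothetical model; the record fields and matching rules are assumptions, not the actual system logic.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ArrivalRecord:
    traveler_id: str          # hypothetical identifier linking biographic records
    arrived: date
    admitted_until: date      # end of authorized period of admission

@dataclass
class DepartureRecord:
    traveler_id: str
    departed: date

def find_apparent_overstays(arrivals: list[ArrivalRecord],
                            departures: list[DepartureRecord],
                            as_of: date) -> list[tuple[str, Optional[date]]]:
    """Match each arrival to the earliest departure on or after it; flag travelers
    with no matching departure who are past their authorized period, or whose
    departure came after that period. Simplified one-trip-per-traveler model."""
    departures_by_traveler: dict[str, list[date]] = {}
    for d in departures:
        departures_by_traveler.setdefault(d.traveler_id, []).append(d.departed)

    flagged: list[tuple[str, Optional[date]]] = []
    for a in arrivals:
        candidates = sorted(x for x in departures_by_traveler.get(a.traveler_id, [])
                            if x >= a.arrived)
        exit_date = candidates[0] if candidates else None
        if exit_date is None and as_of > a.admitted_until:
            flagged.append((a.traveler_id, None))        # no recorded departure
        elif exit_date is not None and exit_date > a.admitted_until:
            flagged.append((a.traveler_id, exit_date))   # departed after expiry
    return flagged
```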
In parallel with the delivery of entry capabilities, DHS examined the use of technology for recording the exit of travelers in the air, sea, and land environments. Increment 1B consisted of a series of air and sea biometric exit pilots that operated from January 2004 to May 2007 at 14 U.S. POEs. The purpose of these pilots was to evaluate three different types of technology solutions: self-service kiosk, mobile device, and a combination of the two. All three solutions involved capturing a traveler's digital photograph and fingerprint. The pilots established the technical feasibility of a biometric exit solution at air and sea POEs. They also identified issues that limited the operational effectiveness of the solution (e.g., unacceptably low traveler compliance rates). Increment 2C, land entry/exit proof-of-concept demonstrations, operated at five ports of entry from August 2005 to November 2006. The purpose of these demonstrations was to examine the feasibility of using passive radio frequency identification (RFID) technology to record travelers' entry and exit via a unique ID number tag embedded in the Form I-94 and to provide CBP officers in pedestrian lanes with biographic, biometric, and watch list data. The demonstrations showed that RFID technology was too immature to meet the requirements of a land exit solution. Currently, US-VISIT development and deployment efforts consist of two ongoing projects: (1) Unique Identity and (2) Comprehensive Exit. Unique Identity is to establish a single identity for all individuals encountered across the immigration and border mission area. This project consists of developing and deploying three capabilities. First, 10-print identification is to provide the means for capturing 10 fingerprints, enable the other two Unique Identity components, and increase fingerprint matching accuracy in IDENT. DHS plans to complete 10-print deployment to all POEs in the fall of 2009. Second, enumeration is to associate the biometric and biographical data within IDENT and FBI's fingerprint identification system with individuals encountered by immigration and border management entities. DHS reports that enumeration is being used by DHS's U.S. Citizenship and Immigration Services. Third, IDENT interoperability with FBI's fingerprint identification system is to enable DHS and FBI to share biometric and related biographic, criminal history, and immigration history data. DHS reports that the development of this interoperability is in the second of three phases, each of which expands the types and amount of data shared between DHS and FBI, and that planning has begun for the third phase. In 2007, DHS estimated that Unique Identity would cost the department about $5.7 billion to acquire and about $40.1 billion to operate and maintain through the year 2020. Comprehensive Exit was chartered in August 2007 to develop and deploy air and sea exit capability and to plan for a land exit solution. Project stakeholders include U.S. Immigration and Customs Enforcement, the Office of Screening Coordination and Operations, CBP, air and sea carriers, port authorities, TSA, and the U.S. Coast Guard. In April 2008, DHS issued a Notice of Proposed Rule Making to announce the intent to implement biometric exit verification at air and sea POEs. Under this notice, commercial air and sea carriers would be responsible for developing and deploying the capability to collect the biometrics from departing travelers and transmit them to DHS.
According to program planning documents, US-VISIT originally planned to publish a final rule in June 2008 and to deploy an initial capability by December 2008. However, a final rule has yet to be published and, according to US-VISIT program officials, an official date for doing so has not been established. Subsequent to the rule making notice, the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009 mandated that no US-VISIT fiscal year 2009 appropriations be used for the implementation of an air exit solution pursuant to the rule making notice until DHS reported to the Senate and House Committees on Appropriations on pilot tests that had been conducted for at least two scenarios: (1) airline collection and transmission of biometric exit data, as proposed in the rule making notice, and (2) CBP collection of such information at the departure gate. Through fiscal year 2009, DHS had been appropriated about $2.5 billion for US-VISIT. As of July 2009, the program reported that about $186 million of that amount had been obligated to develop air/sea and land exit solutions since 2002. The department requested about $356 million for US-VISIT in fiscal year 2010 and was appropriated about $374 million. Since 2004, we have identified a range of management challenges and issues associated with DHS efforts to develop and deploy an exit solution. For example, we reported in May 2004 that a limited exit portion of US-VISIT had been deployed to only two POEs. In February 2005, we reported that the ongoing air and sea exit pilot faced a compressed timeline, had missed milestones, and potentially was to be reduced in scope and that the changing facts and circumstances surrounding the exit pilot had introduced additional risk. In December 2006, we reported that DHS could not implement a biometric exit capability without incurring a major impact on land POE facilities. In February and August 2007, we found that DHS had not adequately defined and justified its proposed expenditures for exit pilots and demonstration projects and that it had not developed a complete schedule for biometric exit implementation. In February 2008, we reported that the Comprehensive Exit project had not been adequately defined, citing its lack of appropriate analysis to support established high-level project milestones. Accordingly, we recommended that DHS develop a plan for delivering a comprehensive exit capability that included, among other things, key milestones and performance measures. In September 2008, we further reported that DHS was unlikely to meet its timeline for implementing an air exit system with biometric indicators, such as fingerprints, by July 1, 2009, due to several unresolved issues, such as opposition to the department's published plan by the airline industry. Most recently, in December 2008, we reported that DHS still had not developed a schedule for the full implementation of a comprehensive exit solution. In each of these reports, we made recommendations to ensure that US-VISIT exit was planned, designed, developed, and implemented in an effective and efficient manner. DHS generally agreed with our recommendations. The US-VISIT Enterprise Life Cycle Methodology (ELCM) is a framework for planning, managing, and implementing capabilities program-wide that applies to all US-VISIT program increments, task orders, mission capability enhancements, projects, components, acquisitions, and all agreements with partner/stakeholder and contractor organizations.
Among other things, the ELCM provides guidance for managing related US-VISIT projects that have distinct cost, schedule, scope, and risk components, and that may be at different project phases at a given time. The ELCM consists of several process areas, such as program management, project execution, and operations and maintenance. The project execution process area includes seven subprocesses, or phases. The subprocesses are plan, which focuses on project-level planning for individual initiatives and builds on the strategic planning that occurs in the program planning process area; analyze, which includes the gathering, identification, refinement, analysis, and management of requirements; design, which includes designing the applications, technical architecture, technical infrastructure, and application training; build, which includes the development of the application, technical architecture, and technical infrastructure; test, which includes testing the components built and validating the requirements; deploy, which includes rolling out the application, technical architecture, technical infrastructure, and training to the organization; and transition, which includes ensuring that all identified transition tasks are carried out and any open issues from deployment are documented and addressed. The operations and maintenance process provides for ongoing support of a deployed system solution. A typical project will be planned, developed, and deployed during project execution and sustained as part of operations and maintenance. Within each subprocess, the ELCM specifies certain activities that are to be performed. For example, the test subprocess defines a series of nine tests that are to be conducted, including user acceptance testing, which verifies that the system meets user requirements, and operational readiness testing, which ensures the operational environment’s readiness to accept the new system.

Comprehensive Exit was initiated to develop and implement a means to capture biometric information from travelers who are subject to US-VISIT as they exit the United States, and to do so in a way that integrates biometrics collection into existing exit procedures at air, sea, and land POEs and enables the matching of biometric exit and entry records to determine which travelers have left the country. According to DHS, this capability will allow the department to confirm the identity of a person leaving the country, and thereby (1) maximize investigative resources by preventing searches for travelers who have already left the country; and (2) identify overstays by country and by visa category, to better inform policy decision makers.

DHS is pursuing the Comprehensive Exit project through six component efforts, each of which addresses either the air/sea or land environment. The air/sea environment is being addressed through Air/Sea Biometric Exit Release 1, Reporting Phase 1, the Air Exit Pilots, and Long-term Air/Sea Exit; the land environment is being addressed through the Temporary Worker Visa Exit Pilot and Long-term Land Exit. The two long-term components for Air/Sea and Land have yet to begin. They are to be informed or supported by the four other components. According to program officials, planning for the two long-term components is contingent upon departmental decisions that have not yet been made. DHS is employing the ELCM to manage each component.
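As described above, the exit capability rests on matching exit records against entry records so that departures can be confirmed and overstays identified by country and visa category. The sketch below is a deliberately simplified, non-biometric illustration of that matching logic; the traveler identifiers, fields, and data are invented, and the real matching would occur biometrically within IDENT and the Arrival and Departure Information System.

```python
from datetime import date
from typing import Dict, List, NamedTuple

class EntryRecord(NamedTuple):
    traveler_id: str        # stand-in for a biometric identity; illustrative only
    admitted_until: date    # end of the authorized period of stay
    country: str
    visa_category: str

def find_overstays(entries: List[EntryRecord],
                   exits: Dict[str, date],
                   as_of: date) -> List[EntryRecord]:
    """Return entries with no exit recorded by the authorized date, or with an
    exit recorded after the authorized period of stay ended."""
    overstays = []
    for entry in entries:
        departed_on = exits.get(entry.traveler_id)
        if departed_on is None and as_of > entry.admitted_until:
            overstays.append(entry)   # no recorded departure
        elif departed_on is not None and departed_on > entry.admitted_until:
            overstays.append(entry)   # departed after the authorized stay ended
    return overstays

# Example: one traveler left on time, one left late, one has no exit record.
entries = [
    EntryRecord("A1", date(2009, 6, 30), "CountryX", "B-2"),
    EntryRecord("B2", date(2009, 5, 31), "CountryY", "H-2A"),
    EntryRecord("C3", date(2009, 4, 30), "CountryZ", "F-1"),
]
exits = {"A1": date(2009, 6, 15), "B2": date(2009, 7, 10)}
print([(e.traveler_id, e.country, e.visa_category)
       for e in find_overstays(entries, exits, date(2009, 8, 1))])
# -> [('B2', 'CountryY', 'H-2A'), ('C3', 'CountryZ', 'F-1')]
```

Grouping the returned records by country and visa category would yield the kind of overstay reporting the department says the capability is meant to support.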
The status of each exit component relative to the ELCM project execution subprocesses is summarized in figure 3 and discussed in more detail after the figure.

The purpose of Air/Sea Biometric Exit Release 1 is to modify IDENT to collect, validate, and store the biometric and biographic data for travelers who are subject to US-VISIT and are exiting the United States via the air or sea environments. For example, this component allows for the biographic and biometric information provided by a departing passenger to be matched against a watch list and, if a hit is found, the passenger’s IDENT record is annotated to make the information available for any future encounters between that individual and other agencies, such as CBP, U.S. Immigration and Customs Enforcement, or local law enforcement. According to program officials, Release 1 was initiated to support the Long-term Air/Sea Exit solution, but it will also allow IDENT to process land POE exit-related data. Testing for this component is in progress, and its completion depends upon the completion of another component. Requirement validation testing of Release 1 was completed in October 2008, with all planned test cases executed. According to program officials, final testing of the release will not occur until data from the Long-term Air/Sea Exit solution are available.

The purpose of Reporting Phase 1 is to enhance IDENT’s reporting capabilities in order to support the information needs of a wide range of US-VISIT users, including the analysis and evaluation of the Air Exit Pilot results. Additional phases are envisioned to deliver other US-VISIT reporting capabilities, such as text-based reporting, charts and graphs, spreadsheet downloading to authorized users’ workstations, on-demand reporting, and near real-time reporting. However, these additional phases have yet to be defined. Final testing of Phase 1 was completed in April 2009, with all planned requirements and test cases executed and five problems of low and medium severity detected. All five were addressed during final testing. Phase 1 was deployed in April 2009 and has transitioned to the operations and maintenance process area.

The purpose of the Air Exit Pilots was to evaluate the impact on airport exit operations of identifying, verifying, and collecting information from passengers who were subject to US-VISIT and leaving the United States. More specifically, the pilots were to evaluate identity verification and exit-recording capabilities when used with existing POE operations and infrastructure, and to biometrically and biographically verify the identity, record the exit, and update the IDENT and Arrival and Departure Information System records of each subject traveler departing the United States at the pilot locations. DHS originally announced the purpose and conditions of an air exit capability in the Notice of Proposed Rulemaking it published in April 2008. As noted earlier, the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009 subsequently required DHS to pilot the two exit operational scenarios described in the notice: airline collection and transmission of biometric exit data and CBP collection of such information at the departure gate. DHS decided to pilot two government alternatives: passenger screening by CBP officers at the departure gate (as required by the act) and passenger screening by TSA officials at the TSA security checkpoint. DHS did not pilot the airline alternative because the airlines decided not to participate.
The CBP alternative was piloted at Detroit Metropolitan Wayne County Airport and the TSA alternative at Hartsfield-Jackson Atlanta International Airport. Pilot testing at both locations was completed in May 2009, using biographic and biometric data collected from a sampling of travelers who were subject to US-VISIT. Although one system problem was found (collected fingerprint images appeared upside down and mirrored), it was corrected and all planned requirements and test cases were successfully executed. The pilots began in May 2009, and they operated until July 2009, as planned. The US-VISIT Comprehensive Exit project manager told us that the pilots have been decommissioned. According to the Air Exit Pilots schedule, the only remaining activity for this component is developing and issuing the final rule for the Long-term Air/Sea Exit component.

The Air Exit Pilots used two types of portable biometric collection devices: (1) a hand-held device (“mobile device”) that scanned information on travel documents and collected biometrics one fingerprint at a time and (2) a small suitcase (“portable device”) that contained a laptop computer, document scanning device, and a biometric scanner that collected a four-print slap. (See fig. 4.) The Detroit pilot used both devices. According to a TSA official, only mobile devices were used in Atlanta because of the limited space available within the checkpoint area. The pilots consisted of these four steps:

Identification. For the CBP pilot, CBP officers prescreened passengers after they provided their boarding passes to airline employees to identify passengers who were subject to US-VISIT and to then direct them to a CBP processing station in the jetway. For the TSA pilot, a TSA Ticket Document Checker prescreened every passenger entering the checkpoint to identify subject passengers, who were then escorted to a processing station manned by Transportation Security Officers equipped with mobile devices.

Collection. Both CBP and TSA officers scanned a machine-readable travel document presented by a passenger to collect biographic data. If the document did not scan correctly, the officers were instructed to enter the biographic data manually into the device. The officers then used the mobile or portable device to collect an index and middle fingerprint or a four-print image, respectively.

Processing. Once the device indicated that the collected prints were of sufficient quality, the CBP and TSA officers directed the passenger to continue onto the departing aircraft or through the normal checkpoint security screening.

Transmission. US-VISIT staff uploaded the information from the devices to a dedicated workstation and transmitted the data to IDENT via a secure network connection. Once transmitted, the data were matched to existing records.

DHS approved a report on the pilot results in October 2009. We are statutorily required to review this report. According to program officials, planning for a target solution for air and sea POEs will begin once the pilots have been completed and after the final rule has been published. According to the US-VISIT Deputy Director, an official date for publishing the final rule has not been established. In general, program officials said that the final rule is to specify how and when an operational air/sea exit solution will be implemented.

The purpose of the Temporary Worker Visa Exit Pilot is to capture the final departure of certain H-2 visa temporary workers at two land border crossings.
The pilot is to use kiosks adapted for outdoor use to record the exit of H-2A and H-2B visa holders who (1) previously entered and are now departing the United States through either San Luis, Arizona, or Douglas, Arizona, and (2) are required to record their final departure with CBP. In December 2008, DHS issued two Federal Register notices announcing the implementation of the pilot, one addressing H-2A visa holders and one addressing H-2B visa holders. According to the notices, the pilot was to be deployed in August 2009. However, according to the US-VISIT Comprehensive Exit Project Manager, the pilot was suspended during the testing subprocess due to lack of CBP funding. The CBP Program Manager for Admissibility and Passenger Programs told us that the pilot is now scheduled for deployment in December 2009. Both the US-VISIT program office and CBP are involved in the pilot. The program office is responsible for project management and kiosk design, development, and operations and maintenance. CBP is to support the development and deployment of the kiosks, and is to operate the pilot. As with the Air Exit Pilots, exit information collected from departing travelers is to be transmitted to IDENT, where it is to be matched against existing records. Assembly testing was completed in May 2009, with all planned requirements and test cases executed. The pilot was originally planned to run for 1 year, after which its effectiveness and feasibility as a potential part of Comprehensive Exit were to be analyzed. However, according to the CBP Program Manager for Admissibility and Passenger Programs, CBP intends to assess the pilot after 6 months of deployment to determine whether to continue it. According to US-VISIT and CBP officials, the pilot results will help inform future decisions on the pedestrian portion of the Long-term Land Exit component.

According to the US-VISIT Program Director and program documentation, a land exit strategy for recording biometric exit at land POEs was completed in November 2008 as planned, and is currently being reviewed by DHS leadership. The Program Director further told us that until the strategy is approved, no other Land Exit activities will be initiated. As a result, this component has yet to begin the first ELCM project execution subprocess.

Given that the Comprehensive Exit project is part of the larger US-VISIT program and consists of multiple components involving several DHS component organizations, it is important for the project to be planned and executed in an integrated fashion. To this end, the US-VISIT program office has established integrated project management plans, and has adopted an integrated approach to interacting with and involving project stakeholders, both of which are important ingredients to project success. However, US-VISIT has not developed and employed an integrated approach to scheduling, executing, and tracking the work that needs to be accomplished to deliver the Comprehensive Exit solution. Rather, it is relying on several separate and distinct schedules to manage individual aspects of the project. Moreover, not all of these individual schedules are reliable because they have not been derived in accordance with relevant schedule estimating guidance.
Without a Comprehensive Exit integrated master schedule that is derived in accordance with relevant guidance, the program office cannot reliably commit to when and how the work needed to deliver the Comprehensive Exit solution will be performed, and it cannot adequately manage and measure its progress in executing the work needed to deliver it.

According to relevant guidance, a key to project success is a well-defined project management plan that provides a complete and integrated view of how the project is being managed. Among other things, the project management plan should (1) define or reference key project management processes, (2) be integrated with other plans that affect project management, and (3) reflect the current and complete scope of the project.

The US-VISIT program has developed a plan for managing Comprehensive Exit that is largely well defined. Specifically, the project management plan calls for tailoring the ELCM framework, which defines a standard set of project management processes. Further, the program office has applied this tailored approach to individual Comprehensive Exit components (e.g., Release 1, Reporting Phase 1, and Air Exit Pilots). In addition, the project management plan is aligned with relevant US-VISIT program plans and procedures, as well as individual Comprehensive Exit component plans. For example, it incorporates by reference a number of key management processes defined in the US-VISIT program-level management plan, such as risk management, configuration management, requirements management, and schedule management. Also, it is referenced in, and aligned with, the component management plan for the Air Exit Pilots. Further, the project management plan has recently been revised, as called for in the plan, to define a more current and complete scope of the project, and to incorporate actual and planned project changes. By having a Comprehensive Exit management plan that reflects an integrated approach to project management, the US-VISIT program office has established an important means for managing project activities in a standard and consistent manner.

Relevant system acquisition guidance recognizes that collaboration among relevant stakeholders is an important part of an integrated project management approach. We have reported that such collaboration can produce better results and outcomes than could be achieved when stakeholders do not act in an integrated and coordinated manner. In this regard, our research shows that effective collaborative activities involve the following practices.

Establishing common outcomes: defining and articulating a shared or common outcome(s) or purpose(s) that organizations or programs are mutually seeking to achieve and that are consistent with their respective goals and missions.

Establishing mutually reinforcing or joint strategies: creating strategies that work in concert with those of partner organizations or programs, or that are joint in nature.

Leveraging resources: identifying the human, technological, physical, and financial resources needed to initiate or sustain the collaborative effort.

Agreeing on roles and responsibilities: working together to define and agree on partners’ respective roles and responsibilities, including how the collaboration efforts will be led.

Establishing a compatible means to operate across organizational boundaries: creating compatible standards, policies, procedures, and data systems that will be used in the collaborative effort.
Developing mechanisms to monitor, evaluate, and report on results: putting in place the means to monitor, evaluate, and report on the collaborative effort to identify areas for improvement.

As previously discussed, the Comprehensive Exit project’s pilot components involve multiple stakeholders, including the US-VISIT program office, CBP, and TSA. To their credit, these stakeholders have collaborated in a manner that is consistent with these practices. As a result, they have established the means to align their activities, processes, and resources to accomplish the objectives of the Comprehensive Exit project pilots.

Within DHS, the US-VISIT program office, along with CBP and TSA, shares a common mission to secure our nation’s borders. Consistent with this shared mission, these organizations have defined a common purpose for both the Air Exit Pilots and the Temporary Worker Visa Exit Pilot. Specifically, the shared purpose of the Air Exit Pilots was to evaluate the operational impact of collecting biometric exit data from travelers near the departure gate and at the TSA security checkpoint, and thereby help inform the implementation of the Air Exit solution. The shared purpose of the Temporary Worker Visa Exit Pilot is to ensure that temporary guest workers depart the United States at the completion of their work authorizations and to analyze the effectiveness and feasibility of one part of the overall Land Exit solution.

The US-VISIT program office, CBP, and TSA have established joint management strategies for executing the Air Exit Pilots and the Temporary Worker Visa Exit Pilot. Specifically, an Integrated Project Team, which is led by the program office and includes representatives from CBP and TSA, was assigned responsibility for planning, execution, and control of both pilots. In addition, the program office developed an Air Exit Pilots Management Plan that defines the project management approach for implementing the Air Exit Pilots. While the program office did not establish a comparable management plan for the Temporary Worker Visa Exit Pilot, it developed a business concept of operations that documents the proposed business process and operational changes needed to implement the Temporary Worker Visa Exit Pilot. Both documents were reviewed by relevant stakeholders.

As previously noted, an Integrated Project Team was assigned responsibility for planning, execution, and control of both pilots. This team has leveraged human, technological, physical, and financial resources provided by the program office, CBP, and TSA. Specifically, key personnel from each organization are members of the Integrated Project Team, and are involved in supporting the execution of the pilots. For example, CBP and TSA provided or plan to provide personnel for collecting biometrics during the pilots, and the program office provided or plans to provide on-site technical support during the pilots. In addition, the program office and CBP have funded their respective efforts, while an interagency agreement has been executed for the program office to fund TSA personnel needed for pilot operations. Also, the program office provided or plans to provide the technology (e.g., mobile and portable devices and kiosks for collecting biometrics and the IDENT system to process and store the biometric data received). Further, CBP and TSA leveraged their physical presence at the Detroit Metropolitan Wayne County Airport and the Hartsfield-Jackson Atlanta International Airport.
Also, CBP is leveraging and augmenting its physical infrastructure at the San Luis and Douglas POEs in Arizona. For example, it is ensuring that proper network connectivity exists from the kiosks to IDENT and that needed electrical and facility modifications are made at the sites.

The program office, CBP, and TSA have defined and agreed on roles and responsibilities for the Air Exit Pilots and the Temporary Worker Visa Exit Pilot. Specifically, the Air Exit Pilots Management Plan and business concept of operations documents define roles and responsibilities for the program office, CBP, and TSA, and these documents were reviewed or approved by all relevant parties. For example, the Air Exit Pilots Business Concept of Operations states that the program office is to evaluate and determine which biometric data collection devices will be used and provide these devices, as well as the necessary training, to CBP and TSA, while CBP and TSA are to collect the biometric exit data from travelers who were subject to US-VISIT during the pilot. Also, the Air Exit Pilots Management Plan identifies individual roles and responsibilities for key program personnel providing direct support to the project. Further, the Temporary Worker Visa Exit Pilot business concept of operations states that the program office is to serve as the overall project manager and acquire the kiosks, while CBP is to serve as the operational manager and perform the day-to-day maintenance and operation of the kiosks once they have been deployed to the sites. It also defines more detailed roles and responsibilities for specific groups within the program office and CBP, such as US-VISIT Project Management, US-VISIT Information Technology Management, CBP Office of Field Operations, and CBP Office of Information Technology.

As the overall project management lead for both pilots, the program office established an Integrated Project Team that includes CBP and TSA and has aligned the pilots with the ELCM and other project management procedures to ensure they are managed consistently. For example, CBP and the program office were both involved in developing requirements for the Temporary Worker Visa Exit Pilot. As another example, when CBP officials identified a lack of CBP funding for the Temporary Worker Visa Exit Pilot, they reported this to the program office as a risk. The risk was subsequently tracked through the risk management process. As another example, CBP required a change in the kiosk solution for the Temporary Worker Visa Exit Pilot to allow it to withstand outdoor use, and submitted a change request through the established change management process to “ruggedize” the kiosks.

The Comprehensive Exit project management approach includes mechanisms for monitoring, evaluating, and reporting on the results of project efforts. For example, the project management plan discusses quality assurance activities, such as peer review of project artifacts and deliverables, and testing and evaluation of hardware and software. As another example, the project management plan identifies status reporting requirements, such as quarterly program management reviews, which provide an overview of the project’s status, budget, resource levels, and any outstanding issues. In addition, the program office has applied pilot-specific mechanisms for monitoring, evaluating, and reporting on results.
For example, the Air Exit Pilots Management Plan describes a five-step process improvement model for identifying, implementing, and evaluating solutions to problems during the execution of the pilots. Also, this plan establishes a stakeholder communication matrix, which documents the activities and reports for intra/inter-agency communication throughout different phases of the pilot (e.g., ongoing, predeployment, deployment, pilot operations, and disposition and analysis). Further, the program office defined performance metrics for the evaluation of the Air Exit Pilots, and it involved CBP and TSA in doing so.

The success of a project depends in part on having an integrated and reliable master schedule that defines, among other things, when work activities will occur, how long they will take, and how they are related to one another. As such, the project schedule not only provides a road map for systematic project execution, but also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. In addition, US-VISIT’s program and project management guidance and plans recognize that schedule management plays a critical role in the success of its activities. For example, the program management plan requires a tiered and integrated master schedule that includes contractor schedules for each task order and a project-level schedule. Further, US-VISIT’s program guidance states that the integrated master schedule provides a means to ensure attainability of program objectives and evaluate the project’s progress in doing so.

Program officials told us they do not have an integrated master schedule for the Comprehensive Exit project. Instead, each ongoing project component has its own separate schedule. In addition, the US-VISIT prime contractor has its own schedule to support the project components, although program officials said that the work in this schedule is manually incorporated into each component schedule. However, our analysis of the schedules for ongoing Comprehensive Exit components, as well as the contractor’s schedule, did not show any evidence of this, and the program office provided no other documentation to demonstrate that the manual incorporation exists. According to program officials, DHS cannot develop a complete schedule for the Comprehensive Exit project until decisions have been made on the direction and scope of the Air/Sea and Land exit solutions. However, relevant guidance states that a comprehensive schedule should reflect all activities for a project and recognizes that there can be uncertainties and unknown factors in schedule estimates due to, among other things, limited data. In light of such uncertainties and unknowns, the guidance discusses the need to perform a schedule risk analysis to determine the level of uncertainty and to help identify and mitigate the risks. As a result, DHS does not have a comprehensive project view of the work that must be, among other things, sequenced, timed, resourced, and risk-adjusted to deliver the Comprehensive Exit solution. Without such a view, a sound basis does not exist for knowing with any degree of confidence when and how the project will be completed.

The lack of an integrated master schedule is compounded by the fact that the individual component schedules are not reliable. Our research has identified nine practices associated with developing and maintaining a reliable schedule.
These practices are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying float between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates. In addition, the project management plan states that a project schedule should reflect the work breakdown structure for the project as well as ELCM-required artifacts. The plan also requires that the project schedule be horizontally and vertically integrated, that all scheduled milestones and tasks be linked logically, and that schedule status be captured on a regular basis.

Both the Air Exit Pilots schedule and the Temporary Worker Visa Exit Pilot schedule fully meet only one of the nine key schedule estimating practices and partially meet, minimally meet, or do not meet the remaining eight. In contrast, the prime contractor’s schedule is largely reliable, as it fully or substantially meets all nine practices. Relevant guidance states that, to be considered reliable, a schedule needs to fully meet all nine practices. The extent to which the two component schedules and the contractor’s schedule meet the nine practices is summarized below and in table 1. A detailed discussion of the extent to which each schedule meets the nine practices is in appendix II.

Component schedules: Both the Air Exit Pilots and Temporary Worker Visa Exit Pilot schedules establish the duration of time planned for executing key activities, and they detail work activities that are integrated with higher-level milestones and summary activities. However, neither schedule reflects a valid critical path due to a high number of missing dependencies and rigid schedule constraints. For example, the schedule contains 16 remaining activities that identify dates when the activities must begin. These are rigid schedule constraints, and such dates remain fixed regardless of the allocation of resources or predecessor activities finishing on time, earlier, or later. This is important because the critical path represents the longest chain of activities through the network and determines the length of the project. Thus, delays in an activity that is on the critical path would cause the entire component effort to slip. Without a valid critical path, the program office cannot accurately determine the amount of time required to complete the project component and assess how delays impact the projected completion date. According to program officials, they manage each exit component to a critical path that is calculated by the scheduling software on a weekly basis. However, as noted above, the critical paths are not valid due to missing dependencies and rigid schedule constraints. In addition, neither schedule is based on a schedule risk analysis. A schedule risk analysis is important because it allows high-priority risks to be identified and mitigated, and the level of confidence in meeting projected completion dates to be predicted. Also, officials stated they do not perform regular, electronic checks on the schedules to know the true status of the components and thus ensure the integrity of the schedules’ logic. Furthermore, neither schedule assigns resources to activities, which limits insight into current or projected resource allocation issues.
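To make the critical-path and float concepts concrete, the sketch below computes both for a small activity network. The activity names, durations, and dependencies are invented for illustration and are not drawn from the program's schedules.

```python
# Compute the critical path and float for a toy activity network.
activities = {          # name: (duration_in_days, list_of_predecessors)
    "plan":    (5,  []),
    "build":   (10, ["plan"]),
    "test":    (7,  ["build"]),
    "train":   (3,  ["plan"]),
    "deploy":  (2,  ["test", "train"]),
}

# Forward pass: earliest finish of each activity, and overall project length.
early_finish = {}
def ef(name):
    if name not in early_finish:
        dur, preds = activities[name]
        early_finish[name] = dur + max((ef(p) for p in preds), default=0)
    return early_finish[name]
project_length = max(ef(a) for a in activities)

# Backward pass: latest finish each activity can have without delaying the end date.
late_finish = {a: project_length for a in activities}
for name in sorted(activities, key=ef, reverse=True):
    dur, preds = activities[name]
    for p in preds:
        late_finish[p] = min(late_finish[p], late_finish[name] - dur)

# Float (slack) is the gap between latest and earliest finish; zero float = critical.
float_days = {a: late_finish[a] - ef(a) for a in activities}
critical_path = [a for a in activities if float_days[a] == 0]

print("project length:", project_length)   # 24 days
print("critical path:", critical_path)     # ['plan', 'build', 'test', 'deploy']
print("float:", float_days)                # 'train' has 14 days of float
```

Delaying any zero-float activity by a day pushes the 24-day finish out by a day, while the activity with float can slip without moving the end date; a missing dependency or a rigid "must start on" date breaks exactly this calculation, which is why such defects invalidate a schedule's critical path.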
In addition, without resources assigned to activities, the risk that the projected completion date will slip increases.

Contractor schedule: The prime contractor’s schedule reflects a number of best practices. For example, this schedule can be traced to the contractor’s work breakdown structure, activities have appropriate logical sequencing, and resources are assigned to activities. In addition, contractor representatives stated they have performed a risk assessment of the schedule and regularly update the status and perform tests to ensure the integrity of schedule logic. However, the schedule does not reflect a valid critical path because it contains two separate critical paths that are not linked. By definition, the critical path must run from the first event to the last event without a break in continuity. As stated previously, without a valid critical path, the contractor cannot accurately determine the amount of time required to complete scheduled work.

Without a fully integrated and reliably derived schedule for the entire Comprehensive Exit project, the program office cannot identify when and how a full exit capability will be delivered, and it cannot adequately manage and measure its progress in executing the work needed to deliver it.

To DHS’s credit, it has completed or has under way five of six components that fall under the auspices of its US-VISIT Comprehensive Exit project, whose statuses range from preplanning to having transitioned to operations and maintenance, and it is managing some aspects of these various project components in an integrated manner. For example, each component is being governed by a defined and standardized US-VISIT project execution methodology, and each component is subject to the program’s management processes, such as the process for managing project risks. Further, those components that involve multiple organizational stakeholders are being executed to ensure that stakeholders interact in an integrated and coordinated manner.

Nevertheless, if and when Comprehensive Exit will be operational remains unclear, in part because DHS still does not have an integrated master schedule defining the timing and sequencing of the work and events needed to deliver US-VISIT exit capabilities to its air, sea, and land ports of entry. Instead, it has separate schedules for managing individual components, as well as the prime contractor’s schedule supporting all of the components, and these schedules do not collectively provide a road map for delivering a comprehensive exit solution, including such things as the sequencing and timing of the work needed to produce the solution, a realistic target date for doing so, and the resources associated with executing the work. Moreover, even the individual schedules governing the execution of what DHS described as unrelated components are not sufficiently reliable as standalone schedules.

For the Comprehensive Exit project to be managed in a fully integrated manner, it is important for DHS to develop and implement an integrated master schedule. If it does not, it will not be able to commit to when and how the exit side of US-VISIT will become operational, and it will not have a key aspect of the means by which to get there and to measure its progress in doing so. To better ensure the successful delivery of a comprehensive US-VISIT exit solution, we are augmenting our prior recommendations aimed at strengthening Comprehensive Exit project planning.
Specifically, we recommend that the Secretary of Homeland Security direct the Undersecretary for National Protection and Programs to have the US-VISIT Program Director develop and maintain an integrated master schedule for the Comprehensive Exit project in accordance with the nine practices discussed in this report.

In written comments on a draft of this report, signed by the Director, Departmental GAO/Office of the Inspector General Liaison Office and reprinted in appendix III, the department stated that it concurred with our recommendation. DHS also provided technical comments, which we have incorporated into this report as appropriate.

We will send copies of this report to the Chairman and Ranking Member of the Senate Committee on Homeland Security and Governmental Affairs, the Chairmen and Ranking Members of the Senate and House Appropriations Committees, and other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We will also send copies to the Secretary of Homeland Security and the Director of the Office of Management and Budget. In addition, this report will be available at no charge on the GAO Web site at www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

Our objectives were to determine (1) the status of the Department of Homeland Security’s (DHS) efforts to deliver a comprehensive exit solution for the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program and (2) the extent to which DHS is applying an integrated approach to managing its comprehensive exit solution.

To determine the status of efforts to deliver a comprehensive exit solution, we first identified the component efforts which constitute the Comprehensive Exit project, and then we identified the status of each relative to the phases in the US-VISIT Enterprise Life Cycle Methodology (ELCM). We reviewed key program documentation, such as the US-VISIT Comprehensive Exit Project Plan and Comprehensive Exit project documentation (e.g., concepts of operation, design documents, project schedules, requirements documentation, and test plans). In doing so, we focused on determining such key factors as what project activities were planned, when and how they were to be accomplished, and whether activities were completed as planned. We also interviewed officials from the US-VISIT program office, U.S. Customs and Border Protection (CBP), and the Transportation Security Administration (TSA) to determine how the comprehensive exit solution is being designed and implemented, and what future plans for the project have been developed. Finally, we visited the Detroit Metropolitan Wayne County Airport and the Hartsfield-Jackson Atlanta International Airport to observe the operation of the Air Exit Pilots and interviewed officials from US-VISIT (both locations), CBP (Detroit), and TSA (Atlanta) to obtain details as to how the pilots were operating.

To determine the extent to which DHS is applying an integrated approach to managing the Comprehensive Exit project, we assessed project planning, stakeholder coordination, and schedule estimation efforts against relevant best practices.
Specifically:

To identify the extent to which DHS is applying an integrated approach to project planning, we reviewed key project planning documentation, such as the US-VISIT Comprehensive Exit Project Plan and Air Exit Pilots Management Plan, and compared it with relevant best practices for integrated project management.

To establish the extent to which DHS is applying key stakeholder coordination and collaboration practices to the Comprehensive Exit project, we reviewed key project planning documentation (e.g., Comprehensive Exit Project Plan, Air Exit Pilots Management Plan, concepts of operation, and project tailoring plans) and compared it with relevant best practices.

To determine the extent to which DHS is applying key schedule estimating practices to the Comprehensive Exit project, we reviewed schedule estimates for ongoing exit work (Air Exit Pilots schedule, Temporary Worker Visa Exit Pilot schedule, contractor schedule) and compared them with relevant best practices.

In doing so, we categorized our determinations as met, substantially met, partially met, minimally met, or not met. Our determinations were also based on interviews with knowledgeable US-VISIT, CBP, and TSA officials.

We conducted this performance audit at the US-VISIT Program Office in Arlington, Virginia; CBP headquarters offices in Washington, D.C.; TSA headquarters offices in Arlington, Virginia; Detroit Metropolitan Wayne County Airport in Detroit, Michigan; and Hartsfield-Jackson Atlanta International Airport in Atlanta, Georgia, from January 2009 to November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Our research has identified nine practices associated with effective schedule estimating: (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) integrating schedule activities horizontally and vertically, (6) establishing the critical path for all activities, (7) identifying float between activities, (8) conducting a schedule risk analysis, and (9) updating the schedule using logic and durations to determine the dates. For the Comprehensive Exit project, we analyzed schedules representing ongoing work, which included the Air Exit Pilots component schedule, the Temporary Worker Visa Exit Pilot component schedule, and the prime contractor schedule, against the nine best practices. Tables 2, 3, and 4 provide the detailed results of our analyses of these schedules.

In addition to the individual named above, Paula Moore, Assistant Director; Justin Booth; Neil Doherty; Rebecca Eyler; Nancy Glover; Richard Hagerman; Dave Hinchman; Jason Lee; Karen Richey; and Jeanne Sung made key contributions to this report.
The Department of Homeland Security's (DHS) U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program stores and processes biometric and biographic information to, among other things, control and monitor the entry and exit of foreign visitors. Currently, an entry capability is operating at almost 300 U.S. ports of entry, but an exit capability is not. The Government Accountability Office (GAO) has previously reported on limitations in DHS's efforts to plan and execute the delivery of US-VISIT exit, and made recommendations to improve these areas. GAO was asked to determine (1) the status of DHS's efforts to deliver a comprehensive exit solution and (2) to what extent DHS is applying an integrated approach to managing its comprehensive exit solution. To accomplish this, GAO assessed US-VISIT exit project plans, schedules, and other management documentation against relevant criteria, and it observed exit pilots.

DHS has established a Comprehensive Exit project within its US-VISIT program that consists of six components that are at varying stages of completion. To DHS's credit, the US-VISIT program office has established integrated project management plans for its Comprehensive Exit project and has adopted an integrated approach to interacting with and involving stakeholders. However, it has not adopted an integrated approach to scheduling, executing, and tracking the work that needs to be accomplished to deliver a comprehensive exit solution. Rather, it is relying on several separate and distinct schedules to manage individual components and the US-VISIT prime contractor's work that supports these components. Moreover, neither of the two component schedules that GAO reviewed is reliable, because neither has been derived in accordance with relevant guidance. Specifically, both the Air Exit Pilots schedule and the Temporary Worker Visa Exit Pilot schedule fully meet only one of nine key schedule estimating practices and partially meet, minimally meet, or do not meet the remaining eight. In contrast, the prime contractor's schedule is largely reliable, as it fully or substantially meets all nine practices. Without a master schedule for the Comprehensive Exit project that is integrated and derived in accordance with relevant guidance, DHS cannot reliably commit to when and how the work will be accomplished to deliver a comprehensive exit solution to its almost 300 ports of entry, and it cannot adequately monitor and manage its progress toward this end.
The Centers for Disease Control and Prevention (CDC) in the Department of Health and Human Services is the federal agency primarily responsible for monitoring the incidence of foodborne illness in the United States. In collaboration with state and local health departments and other federal agencies, CDC investigates outbreaks of foodborne illnesses and supports disease surveillance, research, prevention efforts, and training related to foodborne illnesses. CDC coordinates its activities concerning the safety of the food supply with the Food and Drug Administration (FDA), which is also in the Department of Health and Human Services. With respect to the safety of meat, poultry, and eggs, CDC coordinates with the Food Safety and Inspection Service (FSIS) in the U.S. Department of Agriculture (USDA). CDC monitors individual cases of illness from harmful bacteria, viruses, chemicals, and parasites (hereafter referred to collectively as pathogens) that are known to be transmitted by foods, as well as foodborne outbreaks, through voluntary reports from state and local health departments, FDA, and FSIS. In practice, because CDC does not have the authority to require states to report data on foodborne illnesses, each state determines which diseases it will report to CDC. In addition, state laboratories voluntarily report the number of positive test results for several diseases that CDC has chosen to monitor. However, these reports do not identify the source of infection and are not limited to cases of foodborne illness. CDC also investigates a limited number of more severe or unusual outbreaks when state authorities request assistance. At least 30 pathogens are associated with foodborne illnesses. For reporting purposes, CDC categorizes the causes of outbreaks of foodborne illnesses as bacterial, chemical, viral, parasitic, or unknown pathogens. Although many people associate foodborne illnesses primarily with meat, poultry, eggs, and seafood products, many other foods—including milk, cheese, ice cream, orange and apple juices, cantaloupes, and vegetables—have also been involved in outbreaks during the last decade. Bacterial pathogens are the most commonly identified cause of outbreaks of foodborne illnesses. Bacterial pathogens can be easily transmitted and can multiply rapidly in food, making them difficult to control. CDC has targeted four of them—E. coli O157:H7, Salmonella Enteritidis, Listeria monocytogenes, and Campylobacter jejuni—as being of greatest concern. The existing data on foodborne illnesses have weaknesses and may not fully depict the extent of the problem. In particular, public health experts believe that the majority of cases of foodborne illness are not reported because the initial symptoms of most foodborne illnesses are not severe enough to warrant medical attention, the medical facility or state does not report such cases, or the illness is not recognized as foodborne. However, according to the best available estimates, based largely on CDC’s data, millions of people become sick from contaminated food each year, and several thousand die. In addition, public health and food safety officials believe that the risk of foodborne illnesses is increasing for several reasons. Between 6.5 million and 81 million cases of foodborne illness and as many as 9,100 related deaths occur each year, according to the estimates provided by several studies conducted over the past 10 years. 
The wide range in the estimated number of foodborne illnesses and related deaths is due primarily to the considerable uncertainty about the number of cases that are never reported to CDC. For example, CDC officials believe that many intestinal illnesses that are commonly referred to as the stomach flu are caused by foodborne pathogens. People do not usually associate these illnesses with food because the onset of symptoms occurs 2 or more days after the contaminated food was eaten. Furthermore, most physicians and health professionals treat patients who have diarrhea without ever identifying the specific cause of the illness. In severe or persistent cases, a laboratory test may be ordered to identify the responsible pathogen. Finally, physicians may not associate the symptoms they observe with a pathogen that they are required to report to the state or local health authorities. For example, a CDC official cited a Nevada outbreak in which no illnesses from E. coli O157:H7 had been reported to health officials, despite a requirement that physicians report such cases to the state health department. Nevertheless, 58 illnesses from this outbreak were subsequently identified. In the absence of more complete reporting, researchers can only broadly estimate the number of illnesses and related deaths. Food safety and public health officials believe that several factors are contributing to an increased risk of foodborne illnesses. First, the food supply is changing in ways that can promote foodborne illnesses. For example, as a result of modern animal husbandry techniques, such as crowding a large number of animals together, the pathogens that can cause foodborne illnesses in humans can spread throughout the herd. Also, because of broad distribution, contaminated food products can reach more people in more locations. Subsequent mishandling can further compound the problem. For example, leaving perishable food at room temperature increases the likelihood of bacterial growth and undercooking reduces the likelihood that bacteria will be killed. Knowledgeable experts believe that although illnesses and deaths often result from improper handling and preparation, the pathogens were, in many cases, already present at the processing stage. Second, because of demographic changes, more people are at greater risk of contracting a foodborne illness. In particular, certain populations are at greater risk for these illnesses: people with suppressed immune systems, children in group settings like daycare, and the elderly. Third, three of the four pathogens CDC considers the most important were unrecognized as causes of foodborne illness 20 years ago—Campylobacter, Listeria, and E. coli O157:H7. Fourth, bacteria already recognized as sources of foodborne illnesses have found new modes of transmission. While many illnesses from E. coli O157:H7 occur from eating insufficiently cooked hamburger, these bacteria have also been found more recently in other foods, such as salami, raw milk, apple cider, and lettuce. Fifth, some pathogens are far more resistant than expected to long-standing food-processing and storage techniques previously believed to provide some protection against the growth of bacteria. For example, some bacterial pathogens (such as Yersinia and Listeria) can continue to grow in food under refrigeration. Finally, according to CDC officials, virulent strains of well-known bacteria have continued to emerge. For example, one such pathogen, E. coli O104:H21, is another potentially deadly strain of E. coli. 
In 1994, CDC found this new strain in milk from a Montana dairy. While foodborne illnesses are often temporary, they can also result in more serious illnesses requiring hospitalization, long-term disability, and death. Although the overall cost of foodborne illnesses is not known, two recent USDA estimates place some of the costs in the range of $5.6 billion to more than $22 billion per year. The first estimate, covering only the portion related to the medical costs and productivity losses of seven specific pathogens, places the costs in the range of $5.6 billion to $9.4 billion. The second, covering only the value of avoiding deaths from five specific pathogens, places the costs in the range of $6.6 billion to $22 billion. Although often mild, foodborne illnesses can lead to more serious illnesses and death. For example, in a small percentage of cases, foodborne infections can spread through the bloodstream to other organs, resulting in serious long-term disability or death. Serious complications can also result when diarrhetic infections resulting from foodborne pathogens act as a triggering mechanism in susceptible individuals, causing an illness such as reactive arthritis to flare up. In other cases, no immediate symptoms may appear, but serious consequences may eventually develop. The likelihood of serious complications is unknown, but some experts estimate that about 2 to 3 percent of all cases of foodborne illness lead to serious consequences. For example: E. coli O157:H7 can cause kidney failure in young children and infants and is most commonly transmitted to humans through the consumption of undercooked ground beef. The largest reported outbreak in North America occurred in 1993 and affected over 700 people, including many children who ate undercooked hamburgers at a fast food restaurant chain. Fifty-five patients, including four children who died, developed a severe disease, Hemolytic Uremic Syndrome, which is characterized by kidney failure. Salmonella can lead to reactive arthritis, serious infections, and deaths. In recent years, outbreaks have been caused by the consumption of many different foods of animal origin, including beef, poultry, eggs, milk and dairy products, and pork. The largest outbreak, occurring in the Chicago area in 1985, involved over 16,000 laboratory-confirmed cases and an estimated 200,000 total cases. Some of these cases resulted in reactive arthritis. For example, one institution that treated 565 patients from this outbreak confirmed that 13 patients had developed reactive arthritis after consuming contaminated milk. In addition, 14 deaths may have been associated with this outbreak. Listeria can cause meningitis and stillbirths and is fatal in 20 to 40 percent of cases. All foods may contain these bacteria, particularly poultry and dairy products. Illnesses from this pathogen occur mostly in single cases rather than in outbreaks. The largest outbreak in North America occurred in 1985 in Los Angeles, largely in pregnant women and their fetuses. More than 140 cases of illness were reported, including at least 13 cases of meningitis. At least 48 deaths, including 20 stillbirths or miscarriages, were attributed to the outbreak. Soft cheese produced in a contaminated factory was confirmed as the source. Campylobacter may be the most common precipitating factor for Guillain-Barre syndrome, which is now one of the leading causes of paralysis from disease in the United States. 
Campylobacter infections occur in all age groups, with the greatest incidence in children under 1 year of age. The vast majority of cases occur individually, primarily from poultry, not during outbreaks. Researchers estimate that 4,250 cases of Guillain-Barre syndrome occur each year and that about 425 to 1,275 of these cases are preceded by Campylobacter infections. While the overall annual cost of foodborne illnesses is unknown, the studies we reviewed estimate that it is in the billions of dollars. The range of estimates among the studies is wide, however, principally because of uncertainty about the number of cases of foodborne illness and related deaths. Other differences stem from the differences in the analytical approach used to prepare the estimate. Some economists attempt to estimate the costs related to medical treatment and lost wages (the cost-of-illness method); others attempt to estimate the value of reducing the incidence of illness or loss of life (the willingness-to-pay method). Two recent estimates demonstrate these differences in analytical approach. In the first, USDA’s Economic Research Service (ERS) used the cost-of-illness approach to estimate that the 1993 medical costs and losses in productivity resulting from seven major foodborne pathogens ranged between $5.6 billion and $9.4 billion. Of these costs, $2.3 billion to $4.3 billion were the estimated medical costs for the treatment of acute and chronic illnesses, and $3.3 billion to $5.1 billion were the productivity losses from the long-term effects of foodborne illnesses. CDC, FDA, and ERS economists stated that these estimates may be low for several reasons. First, the cost-of-illness approach generates low values for reducing health risks to children and the elderly because these groups have low earnings and hence low productivity losses. Second, this approach does not recognize the value that individuals may place on (and pay for) feeling healthy, avoiding pain, or using their free time. In addition, not all of the 30 pathogens associated with foodborne illnesses were included. In the second analysis, ERS used the willingness-to-pay method to estimate the value of preventing deaths for five of the seven major pathogens (included in the first analysis) at $6.6 billion to $22 billion in 1992. The estimate’s range reflected the range in the estimated number of deaths, 1,646 to 3,144, and the range in the estimated value of preventing a death, $4 million to $7 million. Although these estimated values were higher than those resulting from the first approach, they may have also understated the economic cost of foodborne illnesses because they did not include an estimate of the value of preventing nonfatal illnesses and included only five of the seven major pathogens examined in the first analysis. The federal food safety system has evolved over the years as changes were made to address specific health threats and respond to new technological developments. Often such changes occurred in reaction to a major outbreak of foodborne illness when consumers, industry, regulatory agencies, and the Congress agreed that actions needed to be taken. The system has been slow to respond to changing health risks, for a variety of reasons, including a lack of comprehensive data on the levels of risk and the sources of contamination. 
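The two estimate ranges quoted above follow directly from their components; a quick arithmetic check, using only figures stated in the text, is sketched below.

```python
# Cost-of-illness estimate: medical costs plus productivity losses (in billions).
medical_low, medical_high = 2.3, 4.3
productivity_low, productivity_high = 3.3, 5.1
print(round(medical_low + productivity_low, 1),
      round(medical_high + productivity_high, 1))   # -> 5.6 9.4

# Willingness-to-pay estimate: deaths prevented times value per death prevented.
deaths_low, deaths_high = 1_646, 3_144
value_low, value_high = 4e6, 7e6                     # $4 million to $7 million
print(round(deaths_low * value_low / 1e9, 3),
      round(deaths_high * value_high / 1e9, 3))      # -> 6.584 22.008 (billions)
```

The products round to the $6.6 billion and $22 billion endpoints reported for the 1992 willingness-to-pay estimate.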
While current data indicate that the risk of foodborne illnesses is significant, public health and food safety officials believe that these data do not identify the level of risk, the sources of contamination, and the populations most at risk in sufficient detail. According to these experts, the current voluntary reporting system does not provide sufficient data on the prevalence and sources of foodborne illnesses. There are no specific national requirements for reporting on foodborne pathogens. According to CDC, states do not (1) report on all pathogens of concern, (2) usually identify whether food was the source of the illness, or (3) identify many of the outbreaks or individual cases of foodborne illness that occur. Consequently, according to CDC, FDA, and FSIS, public health officials cannot precisely determine the level of risk from known pathogens or be certain that they can detect the existence and spread of new pathogens in a timely manner. They also cannot identify all factors that put the public at risk or all types of food or situations in which microbial contamination is likely to occur. Finally, without better data, regulators cannot assess the effectiveness of their efforts to control the level of pathogens in food. More uniform and comprehensive data on the number and causes of foodborne illnesses could form the basis of more effective control strategies. A better system for monitoring the extent of foodborne illnesses would actively seek out specific cases and would include outreach to physicians and clinical laboratories. CDC demonstrated the effectiveness of such an outreach effort when it conducted a long-term study, initiated in 1986, to determine the number of cases of illness caused by Listeria. This study showed that a lower rate of illness caused by Listeria occurred between 1989 and 1993 during the implementation of food safety programs designed to reduce the prevalence of Listeria in food. In July 1995, CDC, FDA, and FSIS began a comprehensive effort to track the major bacterial pathogens that cause foodborne illnesses. These agencies are collaborating with the state health departments in five areas across the country to better determine the incidence of infection with Salmonella, E. coli O157:H7, and other foodborne bacteria and to identify the sources of diarrheal illness from Salmonella and E. coli O157:H7. Initially, FDA provided $378,000 and FSIS provided $500,000 through CDC to the five locations for 6 months. For fiscal year 1996, FSIS is providing $1 million and FDA is providing $300,000. CDC provides overall management and coordination and facilitates the development of technical expertise at the sites through its established relationships with the state health departments. CDC and the five sites will use the information to identify emerging foodborne pathogens and monitor the incidence of foodborne illness. FSIS will use the data to evaluate the effectiveness of new food safety programs and regulations to reduce foodborne pathogens in meat and poultry and assist in future program development. FDA will use the data to evaluate its efforts to reduce foodborne pathogens in seafood, dairy products, fruit, and vegetables. The agencies believe that this effort should be a permanent part of a sound public health system. According to CDC, FDA, and FSIS officials, such projects must collect data over a number of years to identify national trends and evaluate the effectiveness of strategies to control pathogens in food. 
Funding was decreased (on an annualized basis) for this project in 1996, and these officials are concerned about the continuing availability of funding, in this era of budget constraints, to conduct this discretionary effort over the longer term. While providing more comprehensive data would help federal food safety officials develop better control strategies, it would not address the structural problems that adversely affect the federal food safety system. As we previously testified to this Committee, the current system was not developed under any rational plan but evolved over many years to address specific health threats from particular food products and has not responded to changing health risks. As a result, the food safety system is a patchwork of inconsistent approaches that weaken its effectiveness. For example, as we reported in June 1992, food products posing the same risk are subject to different rules, limited inspection resources are inefficiently used, and agencies must engage in extensive and often unsuccessful coordination activities in an attempt to address food safety issues. While federal agencies have made progress in moving towards a scientific, risk-based inspection system, foods posing similar health risks, such as seafood, meat, and poultry, are still treated differently because of underlying differences in regulatory approach. For example, FDA’s hazard analysis critical control point (HACCP) requirement for seafood processors differs from FSIS’ proposed HACCP program for meat and poultry processors. Under FSIS’ proposal, meat and poultry plants would be required to conduct microbiological tests to verify the overall effectiveness of their critical controls and processing systems. In comparison, FDA’s HACCP program for seafood products has no testing requirement. Furthermore, because the frequency of inspection is based on the agencies’ regulatory approach, some foods may be receiving too much attention, while other foods may not be receiving enough. FSIS will conduct daily oversight of industries that use HACCP programs and will continue to inspect every meat and poultry carcass. Conversely, FDA will inspect seafood plants about once every 2 years and will inspect other food plants under its jurisdiction only about once every 8 years, on average. As we stated in our June 1992 report, such widely differing inspection frequencies for products posing similar risk are an inefficient use of limited federal inspection resources. Moreover, federal agencies are often slow to address emerging food safety concerns because of fragmented jurisdictions and responsibilities. For example, in April 1992, we reported that jurisdictional questions, disagreement about corrective actions, and poor coordination between FDA and USDA had hindered the federal government’s efforts to control Salmonella in eggs for over 5 years. At that time, we stated that the continuing nature of such problems indicated that the food safety structure—with federal agencies having split and concurrent jurisdictions—had a systemic problem. The system’s fragmented structure limited the government’s ability to deal effectively with a major outbreak of foodborne disease, especially when such an outbreak required joint agency action. Today, federal agencies are concerned about the potential impact on public health posed by Bovine Spongiform Encephalopathy (the so-called mad cow disease), which was the subject of your May 10, 1996, hearing.
Because there is still no single, uniform food safety system, jurisdiction remains split between agencies. Ironically, FSIS is responsible for the safety of meat products sold to the public, but is not responsible for preventing cattle from being given feeds that could endanger public health. FDA is responsible. Mr. Chairman, this concludes my prepared remarks; we would be happy to respond to any questions you may have.
Pursuant to a congressional request, GAO discussed foodborne pathogens and their impact on public health. GAO noted that: (1) millions of illnesses and thousands of deaths result annually from contaminated foods; (2) the actual incidence of foodborne illnesses is unknown because most cases go unreported; (3) public health officials believe that the risk of foodborne illnesses has increased over the last 20 years because of food production changes, broader distribution, food mishandling, demographic changes, and new and more resistant bacteria; (4) the Department of Agriculture estimates that the costs of foodborne illnesses range from $5.6 billion to over $22 billion per year; (5) foodborne illnesses can also cause long-term disabilities, such as reactive arthritis and paralysis; (6) states are not required to report all foodborne illnesses or their causes; (7) more uniform and comprehensive data on the number and causes of foodborne illnesses could lead to the development of more effective control strategies, but federal officials are not sure they can continue to fund such data collection efforts if budget cuts continue; (8) federal agencies often do not address emerging food safety concerns because there are different rules for foods posing the same risks and limited inspection resources; and (9) unsuccessful coordination of food safety activities results from agencies' fragmented responsibilities.
Bank holding companies are companies that own or control one or more banks. In the United States, most banks insured by FDIC are owned or controlled by a bank holding company. In addition to bank subsidiaries engaged in traditional banking activities of deposit-taking and lending, many U.S. bank holding companies also own or control nonbank subsidiaries, such as broker-dealers and insurance companies. The Bank Holding Company Act of 1956, as amended, establishes the legal framework under which bank holding companies operate and provides for their supervision, with the Federal Reserve Board having authority over bank holding companies and their banking and nonbanking interests. The Bank Holding Company Act also limits the types of activities that bank holding companies may conduct, either directly or through their nonbank affiliates. The restrictions, which are designed to maintain the general separation of banking and commerce in the United States, allow bank holding companies to engage only in banking activities and those activities that the Federal Reserve Board has determined to be “closely related to banking,” such as extending credit, servicing loans, and performing appraisals of real estate and tangible and intangible personal property, including securities. Under amendments to the Bank Holding Company Act made by the Gramm-Leach-Bliley Act, also known as the Financial Services Modernization Act of 1999, a bank holding company can elect to become a financial holding company that can engage in a broader range of activities that are financial in nature. The Gramm-Leach-Bliley Act defined a set of activities as financial in nature and authorized the Federal Reserve Board, with the agreement of Treasury, to determine whether an additional activity is financial in nature or incidental or complementary to a financial activity. For example, financial holding companies are permitted to engage in securities underwriting and dealing but are prohibited from selling commercial products. Large U.S. bank holding companies typically are registered as financial holding companies and own a number of domestic bank subsidiaries, as well as nonbank and foreign subsidiaries. The largest U.S. bank holding companies have grown substantially in size and scope in recent decades. Between 1990 and July 2012, in part because of waves of mergers, the share of total bank holding company assets controlled by the largest 10 firms increased from less than 30 percent to more than 60 percent. Some bank holding companies grew to become large financial conglomerates that offer a wide range of products that cut across the traditional financial sectors of banking, securities, and insurance. Following the enactment of the Gramm-Leach-Bliley Act in 1999, the assets held in nonbank subsidiaries or at the holding company level grew to account for a progressively larger share of total bank holding company assets. Greater involvement by bank holding companies in nontraditional banking businesses has been accompanied by an increase in the proportion of bank holding company income that is generated by fee income, trading, and other noninterest activities. As large bank holding companies have broadened the scope of their activities and their geographic reach, they have become more organizationally complex. A simple indicator of organizational complexity is the number of separate legal entities within the bank holding company; the largest four U.S. bank holding companies each had at least 2,000 such entities as of June 30, 2013.
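For illustration, the concentration figure cited above (the share of total bank holding company assets controlled by the largest 10 firms) is a simple ratio. The sketch below computes it from invented asset figures; only the computation, not the data, is meant to be illustrative.

    # Illustration only: hypothetical asset figures, not regulatory data.
    # Computes the share of industry assets held by the largest N firms.
    def top_n_share(assets, n=10):
        ranked = sorted(assets, reverse=True)
        return sum(ranked[:n]) / sum(ranked)

    # A hypothetical industry: a handful of very large firms and many small ones ($ billions).
    assets = [2300, 2100, 1900, 1400, 900, 500, 400, 350, 300, 250] + [10] * 600
    print(f"Top-10 share of assets: {top_n_share(assets):.0%}")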
The 2007-2009 financial crisis raised concerns that some U.S. bank holding companies—as well as some nonbank financial institutions—had grown so large, interconnected, and leveraged that their failure could threaten the stability of the U.S. financial system and the global economy. The Dodd-Frank Act includes several provisions intended to reduce the risk of a failure of a large, complex financial institution, the damage that such a failure could do to the economy, and the likelihood that a failing institution would receive government support. For example, the act directs the Federal Reserve Board to impose enhanced prudential standards and oversight on bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by the Financial Stability Oversight Council (FSOC) for supervision by the Federal Reserve Board. The federal government maintains programs—frequently referred to as safety nets—to reduce the vulnerability of depository institutions to runs that could threaten the health of the banking system and the broader economy. Following a series of banking crises in the early 1900s, the government created two programs generally considered to form the core of these safety nets: the Federal Reserve System’s discount window and FDIC deposit insurance. By making emergency liquidity available to solvent depository institutions through the discount window and reducing incentives for depositors to withdraw their funds, these safety nets were intended to help ensure that depository institutions could continue to lend and provide other important services, even during turbulent economic conditions. In addition to the discount window and deposit insurance, the Federal Reserve Board and FDIC have other emergency authorities related to maintaining financial stability. Moreover, the Federal Home Loan Bank System provides liquidity to the banking system, which helps to foster stability. In part because access to federal safety nets potentially reduced incentives for insured depositors to monitor and restrain the risk-taking of banks, banks were also subjected to federal supervision and regulation. The Federal Reserve System, in its role as the lender of last resort, operates discount window programs, which provide a backup source of liquidity through collateralized loans for depository institutions to help ensure the stable flow of credit to households and businesses. During normal market conditions, banks and other depository institutions in generally sound financial condition can obtain discount window loans to address short-term funding needs arising from unexpected funding pressures. In a financial crisis, discount window lending can provide broader liquidity support to the banking system that can help mitigate strains in financial markets. The Federal Reserve Board authorizes the Reserve Banks to offer three discount window programs to depository institutions: primary credit, secondary credit, and seasonal credit, each with its own terms. The primary credit program is the principal discount window program and extends credit to depository institutions in generally sound condition on a very short-term basis (usually overnight). The secondary credit program is available to extend credit on a very short-term basis for depository institutions that are not eligible for primary credit, with the purpose of helping institutions to return to market sources of funds.
The seasonal credit program generally extends loans to small depository institutions that face seasonal fluctuations in their funding needs. Section 10B of the Federal Reserve Act provides the statutory framework for these programs and, among other things, requires all discount window loans to be secured to the satisfaction of the lending Reserve Bank. FDIC deposit insurance covers deposit accounts—including checking and savings accounts, money market deposit accounts, and certificates of deposit—at insured depository institutions up to the insurance limit and is backed by the full faith and credit of the U.S. government. Federal deposit insurance was created to reduce the incentive for depositors to withdraw funds from banks during a financial panic and maintain stability and confidence in the nation’s banking system. During the 1800s and early 1900s, a number of states adopted different versions of deposit insurance to insure bank obligations in response to a wave of bank failures. However, these state insurance funds were later unable to cope with economic events during the 1920s, which led to calls for a system of federal deposit insurance to maintain financial stability. The Banking Act of 1933, which created FDIC by an amendment to the Federal Reserve Act, authorized FDIC to provide deposit insurance to banks and went into effect on January 1, 1934. The deposit insurance fund, administered by FDIC to resolve failed banks and thrifts, protects depositors from losses due to institution failures up to a limit. The deposit insurance fund is primarily funded by fees from assessments on insured depository institutions. If necessary, FDIC can borrow from Treasury, the Federal Financing Bank, and the Federal Home Loan Banks. As discussed later in this report, the Dodd-Frank Act permanently increased the deposit insurance limit from $100,000 to $250,000 and changed the base used to determine an insured depository institution’s risk-based assessment to be paid into the deposit insurance fund. In addition to the discount window and deposit insurance, during the 2007-2009 financial crisis the Federal Reserve Board and FDIC used their emergency authorities to assist individual failing institutions. As discussed later in this report, the Dodd-Frank Act changed these authorities so that emergency lending can no longer be provided to assist a single and specific firm but rather can only be made available through a program with broad-based eligibility—that is, a program that provides funding support to institutions that meet program requirements and choose to participate. Federal Reserve emergency lending authority. Prior to the Dodd-Frank Act, emergency lending authority under Section 13(3) of the Federal Reserve Act permitted the Federal Reserve Board, in unusual and exigent circumstances, to authorize a Reserve Bank to extend credit to individuals, partnerships, or corporations, if the Reserve Bank determined that adequate credit was not available from other banking institutions, and if the extension of credit was secured to the satisfaction of the lending Reserve Bank. During the financial crisis of 2007-2009, the Federal Reserve Board invoked this authority on a number of occasions to authorize one or more Reserve Banks to provide emergency assistance to particular institutions or to establish new programs to provide liquidity support to important credit markets. FDIC open bank assistance.
The FDIC Improvement Act of 1991 included a systemic risk exception to the requirement that FDIC resolve failed banks using the least costly method. Under this exception, FDIC could provide assistance to a failing bank if compliance with its requirements to resolve the bank using the least costly approach would have “serious adverse effects on economic conditions and financial stability”—that is, would cause systemic risk—and if such assistance would “avoid or mitigate such adverse effects.” FDIC could act under the exception only under a process that included recommendations from the FDIC Board of Directors and Federal Reserve Board and approval by the Treasury Secretary. The agencies invoked this authority during the crisis to authorize FDIC to provide guarantees to particular banks and to introduce new guarantee programs with broad-based eligibility. As discussed later in this report, the Dodd-Frank Act effectively removed FDIC’s authority to provide assistance to failing banks outside of a receivership. The Federal Home Loan Bank (FHLB) System also serves to provide funding support to depository institutions during normal and strained market conditions. The FHLB System is a government-sponsored enterprise (GSE) that consists of 12 Federal Home Loan Banks (FHLB) and is cooperatively owned by member financial institutions, which include banks, thrifts, insurance companies, and credit unions. The primary mission of the FHLB System is to promote housing and community development by making loans, known as advances, to member financial institutions. These institutions are required to secure FHLB advances with high-quality collateral (such as single-family mortgages) and may use FHLB advances to fund mortgages. To raise the funds necessary to carry out its activities, the FHLB System issues debt in the capital markets at favorable rates compared to commercial borrowings due to market perceptions that the federal government would intervene to support the FHLB System in a crisis, thereby reducing its risk of default. When credit markets become strained, as they did during the most recent crisis, the FHLB System can serve as an important backup source of liquidity for member institutions that meet the FHLBs’ collateral and other requirements. The 2007-2009 financial crisis was the most severe that the United States has experienced since the Great Depression. The dramatic decline in the U.S. housing market that began in 2006 precipitated a decline in the price of financial assets that were associated with housing, particularly mortgage-related assets based on subprime loans. Some institutions found themselves so exposed to declines in the values of these assets that they were threatened with failure—and some failed—because they were unable to raise the necessary capital as the value of their lending and securities portfolios declined. Uncertainty about the financial condition and solvency of financial entities led banks to dramatically raise the interest rates they charged each other for funds and, in late 2008, interbank lending effectively came to a halt. The same uncertainty also led money market funds, pension funds, hedge funds, and other entities that provide funds to financial institutions to raise their interest rates, shorten their terms, and tighten credit standards. As their funding became increasingly difficult to obtain, financial institutions responded by raising the prices and tightening their credit standards for lending to households and nonfinancial businesses. 
The liquidity and credit crisis made the financing on which businesses and individuals depend increasingly difficult to obtain as cash-strapped banks tightened underwriting standards, resulting in a contraction of credit to the economy. By late summer of 2008, the potential ramifications of the financial crisis included the continued failure of financial institutions, increased losses of individual wealth, reduced corporate investments, and further tightening of credit that would exacerbate the global economic slowdown that was beginning to take shape. Because financial crises can result in severe damage to the economy and the road to recovery can be long, governments and monetary authorities have historically undertaken interventions, even though some of the resulting actions raise concerns about moral hazard and can pose a risk of losses to taxpayers. Given its severity and systemic nature, the recent global financial crisis prompted substantial interventions starting in late 2007, after problems in the subprime mortgage market intensified. As discussed further in the next section of this report, these interventions included the creation of temporary government programs to support important credit markets and financial institutions that intermediate credit in the economy by channeling funds from savers to borrowers. From 2007 through 2009, the federal government’s actions to stabilize the financial system provided funding support and other benefits to bank holding companies and their bank and nonbank financial subsidiaries. The Federal Reserve Board, Treasury, and FDIC introduced new programs with broad-based eligibility that provided funding support to institutions that met program requirements and chose to participate. Selected programs—for which eligibility was not restricted exclusively to institutions that were part of a bank holding company—included Federal Reserve System lending programs, Treasury capital investment programs, and FDIC programs that guaranteed uninsured deposits and new debt issues. Isolating the impact of individual interventions is difficult, but collectively these actions likely improved financial conditions by enhancing confidence in financial institutions and the financial system. Bank holding companies and their subsidiaries also accrued benefits specific to their own institutions, including liquidity benefits from programs that allowed them to borrow at lower interest rates and at longer maturities than might have been available in the markets. Programs generally were made available to institutions of various sizes, and differences in the use of programs by institutions of various sizes were driven in part by differences in how institutions funded themselves. For example, compared to smaller bank holding companies, larger bank holding companies relied less on deposits as a source of funding and more on short-term credit markets and participated more in programs created to stabilize these markets. At the end of 2008, use of these programs—measured for each institution as the percentage of total assets supported by the programs—was larger on average for larger banking organizations—those with $50 billion or more in total assets—than for smaller banking organizations. The six largest bank holding companies were significant participants in several emergency programs but exited most of the programs by the end of 2009. Differences in program use across banking organizations of various sizes diminished as institutions exited the programs.
In addition to programs that provided broad-based support, the Federal Reserve Board granted a number of regulatory exemptions to allow banks to provide liquidity support to their nonbank affiliates and for other purposes. Finally, some large bank holding companies benefitted from individual institution assistance or regulatory relief. For example, government assistance to prevent the failures of large institutions benefited recipients of this assistance and other market participants. During the financial crisis, the Federal Reserve System, Treasury, and FDIC introduced new programs with broad-based eligibility to provide general funding support to the financial sector and to stabilize the financial system. Given this report’s focus on bank holding companies, this section focuses on the financial stability programs that provided the most significant funding support directly to bank holding companies or their bank or nonbank subsidiaries. Table 1 provides an overview of the size, purpose, terms, and conditions of these programs, which included: the Federal Reserve System’s Term Auction Facility (TAF); Primary Dealer Credit Facility (PDCF); Term Securities Lending Facility (TSLF); and Commercial Paper Funding Facility (CPFF); Treasury’s Capital Purchase Program (CPP); and FDIC’s Temporary Liquidity Guarantee Program (TLGP), which had two components: the Debt Guarantee Program (DGP) guaranteed certain newly issued senior unsecured debt, and the Transaction Account Guarantee Program (TAGP) guaranteed certain previously uninsured deposits. Institutions eligible for these programs included both entities that were part of a bank holding company structure and entities that were not. The Federal Reserve System designed its emergency programs to address disruptions to particular credit markets and to assist participants in these markets. For example, the Federal Reserve System’s programs that targeted support to repurchase agreement markets provided assistance to securities firms that were subsidiaries of bank holding companies and securities firms that were not. The Federal Reserve System’s CPFF purchased commercial paper from participating bank holding companies and other financial and nonfinancial firms that met the program’s eligibility requirements. Treasury’s CPP and FDIC’s TLGP provided support primarily to insured depository institutions (banks and thrifts) and bank and savings and loan holding companies. Bank holding companies also benefited from other government programs, such as programs that targeted support to other market participants. For example, in the absence of Treasury and Federal Reserve System programs to guarantee and support money market mutual funds, respectively, such funds may have reduced their purchases of money market instruments issued by subsidiaries of bank holding companies and other firms, thereby exacerbating funding pressures on these firms. Other significant government programs included the Term Asset-Backed Securities Loan Facility (TALF), which was created by the Federal Reserve System to support certain securitization markets, and other programs created by Treasury under TARP authority. While the Federal Reserve System and FDIC provided expanded support through traditional safety net programs for insured banks during the crisis, some of the emergency government programs provided funding support at the bank holding company level—where it could be used to support both bank and nonbank subsidiaries—or directly to nonbank entities. 
In late 2007, the Federal Reserve Board took a series of actions to ease strains in interbank funding markets, including lowering the target federal funds rate, easing terms at the discount window, and introducing a new program—TAF—to auction term loans to banks. However, in part due to statutory and regulatory restrictions on the ability of insured banks to provide funding support to their nonbank affiliates, agencies determined that emergency government support to insured banks was not sufficient to stem disruptions to important credit markets. Nonbank credit markets—such as repurchase agreement and debt securities markets—had grown to rival the traditional banking sector in facilitating loans to consumers and businesses, and agencies determined that actions to address disruptions to these markets were needed to avert adverse impacts to the broader economy. For example, in March 2008, the Federal Reserve Board authorized PDCF and TSLF to address strains in repurchase agreement markets by providing emergency loans to broker-dealers, a few of which were owned by U.S. bank holding companies. When the crisis intensified in September 2008 following the failure of Lehman Brothers Holdings Inc.—a large broker-dealer holding company—the Federal Reserve Board modified terms for its existing programs and took other actions to expand funding support for both bank and nonbank entities. In September 2008, Treasury and the Federal Reserve System introduced new temporary programs to address liquidity pressures on money market funds and to help ensure that these funds could continue to purchase money market instruments issued by bank holding companies and other firms. In addition, in October 2008, Congress enacted legislation under which Treasury provided capital investments to banks, bank holding companies, and other institutions; the legislation also temporarily increased FDIC’s deposit insurance limit from $100,000 to $250,000. Also that month, the Federal Reserve System created CPFF to support commercial paper markets, and FDIC introduced TLGP, under which it guaranteed previously uninsured transaction accounts and certain newly issued senior unsecured debt for participating insured depository institutions, bank and savings and loan holding companies, and approved affiliates of insured depository institutions. For a more detailed discussion of the circumstances surrounding the creation of these programs, see appendix II. Isolating the impact of individual government interventions is difficult, but collectively these interventions helped to improve financial conditions by enhancing confidence in financial institutions and the financial system overall. Bank holding companies and their subsidiaries, in addition to the financial sector and the economy as a whole, benefited from improved financial conditions. Bank holding companies and their subsidiaries also experienced individual benefits from participating in particular programs. Individually and collectively, government lending, guarantee, and capital programs provided important liquidity and other benefits to bank holding companies and their subsidiaries, including: Access to funding in quantities and/or at prices that were generally not available in the markets. Government entities generally sought to set prices for assistance through these programs to be less expensive than prices available during crisis conditions but more expensive than prices available during normal market conditions.
In some credit markets assisted by government programs—such as commercial paper and repurchase agreement markets—conditions had deteriorated such that many institutions faced substantially reduced access to these markets or had no access at all. As discussed below, we compared program pricing to relevant indicators of market pricing where available and found that emergency lending and guarantee programs generally were priced below market alternatives that may have been available. The availability of funding support at this pricing in predictable quantities was also beneficial. Even at times when eligible institutions did not access the available programs, these programs diversified the sources of funds that could be available to them if they faced increased funding pressures. Access to funding at longer maturities. By providing and standing ready to provide funding support for terms of 1 month or longer, government programs helped to reduce rollover risk—the risk that an institution would be unable to renew or “rollover” funding obligations as they came due—for individual institutions and their counterparties. At times during the crisis, bank holding companies and their subsidiaries faced difficulties borrowing at terms of 1 month or longer in several important credit markets, including interbank, repurchase agreement, and commercial paper markets. Government programs mitigated funding pressures for borrowers in these markets by reducing the risk that funding sources would rapidly disappear for an institution or its counterparties. Because participants in these programs were also lenders of funds, these programs helped to encourage these institutions to continue to lend funds to support the economy. Stabilizing deposit funding. FDIC’s TAGP, which temporarily insured certain previously uninsured deposits for a fee, helped to stabilize deposit funding by removing the risk of loss from deposit accounts that were commonly used to meet payroll and other business transaction purposes and allowing banks, particularly smaller ones, to retain these accounts. Deposits are the primary source of funding for most banks, and smaller banks tend to fund themselves to a greater extent with deposits. Funding support for a broad range of collateral types. A few Federal Reserve System programs provided important liquidity benefits to individual institutions and credit markets by allowing institutions to obtain liquidity against a broad range of collateral types. TAF provided 1-month and 3-month loans to eligible banks against collateral types that could also be used to secure discount window loans. While TAF collateral requirements were based on discount window requirements, TAF provided emergency credit on a much larger scale, with TAF loans outstanding peaking at nearly $500 billion, compared to peak primary credit outstanding during the crisis of just over $100 billion. In March 2008, the Federal Reserve System began providing liquidity support to certain nonbank financial firms—the primary dealers—for less liquid collateral types through PDCF and TSLF. Through PDCF, the Federal Reserve Bank of New York (FRBNY) allowed primary dealers to obtain overnight cash loans against harder-to-value collateral types, such as mortgage-backed securities. Through TSLF, FRBNY auctioned loans of Treasury securities to primary dealers in exchange for less-liquid collateral types to increase the amount of high-quality collateral these dealers had available to borrow against in repurchase agreement markets. 
When pressures in repurchase agreement markets intensified in September 2008, the Federal Reserve Board expanded the types of collateral it accepted for both PDCF and TSLF. Although imperfect, one indicator of the extent to which an institution directly benefited from participation in an emergency program is the relative price of estimated market alternatives to the program. To determine how pricing of the emergency assistance compared to market rates, we compared pricing for programs to the pricing for market alternatives that might have been available to program participants. First, we compared the interest rates and fees charged by the Federal Reserve System and FDIC for participation in the emergency lending and guarantee programs with available market alternatives. We considered a number of potential indicators of market interest rates available to financial institutions, including a survey of interbank interest rates (the London Interbank Offered Rate or LIBOR), commercial paper interest rates published by the Federal Reserve Board, spreads on bank credit default swaps (CDS), and interest rates on repurchase agreements. These interest rates provide a general indication of market alternatives that could have been available to participants, but for a number of reasons the rates are unlikely to reflect available alternatives for all participants at all points in time during the crisis and cannot be used to produce a precise quantification of the benefits that accrued to participating financial institutions. For example, participants’ access to market alternatives may have been limited, data on the relevant private market may be limited, or market alternatives could vary across participants in ways that we do not observe in the data. The markets targeted by emergency programs had experienced significant strains, such as a substantial drop in liquidity, a sharp increase in prices, or lenders restricting access only to the most creditworthy borrowers or accepting only the safest collateral. Also, our indicators do not capture all of the benefits associated with participation in the relevant programs. Furthermore, once programs were introduced, they probably influenced the price of market alternatives, making it difficult to interpret differences between emergency program and market prices while programs were active. Second, to determine the extent to which Treasury capital investment programs were priced more generously than market alternatives, we reviewed estimates of the expected budget cost associated with equity funding support programs as well as a valuation analysis commissioned by the Congressional Oversight Panel. For more details on our methodology for these analyses, see appendix III. Based on our analysis, we found that emergency assistance provided through these programs was often priced below estimated market alternatives that might have been available to program participants. This result is consistent with a policy goal of these programs to stabilize financial markets and restore confidence in the financial sector. The pricing of emergency assistance below estimated market alternatives is also evidenced by the significant participation in these programs. Specifically, we found that emergency lending and guarantee programs were generally priced below certain indicators of market alternatives that could have been available. In addition, based on analyses we reviewed, Treasury paid prices above estimated market prices for emergency equity support programs.
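To make the comparison concrete, the sketch below shows the basic spread calculation described above: the difference between a market indicator and a program rate, averaged in basis points over the dates on which both are observed. The rate series shown are hypothetical placeholders, not the data underlying this report.

    # Illustration of the spread comparison described above.
    # The rate series are hypothetical placeholders, not the report's data.
    def average_spread_bps(market_rates, program_rates):
        # Average (market rate - program rate), converted from percent to basis points.
        spreads = [(m - p) * 100 for m, p in zip(market_rates, program_rates)]
        return sum(spreads) / len(spreads)

    libor_1m = [2.46, 1.94, 1.12, 0.55, 0.43]       # market indicator, percent
    program_rate = [2.35, 1.72, 0.88, 0.28, 0.25]   # emergency program rate, percent
    print(f"Average spread: {average_spread_bps(libor_1m, program_rate):.0f} basis points")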
For selected programs that we analyzed, we also found that program pricing would likely have become unattractive in comparison to market pricing during normal and more stable credit conditions. Federal Reserve System programs. Federal Reserve System emergency lending programs during the crisis provided sources of both secured and unsecured funding at rates that were often below those of potential market alternatives and at terms that reduced rollover risk for participants. These characteristics are consistent with a policy goal to stabilize financial conditions by providing funding support for financial institutions that relied on wholesale funding markets. At the time, the markets targeted by the Federal Reserve emergency programs had experienced strains, such as a drop in volume or a significant increase in prices or collateral standards. TAF. Interest rates on TAF loans, on average, were between 22 and 39 basis points lower than three market interest rates that could have represented alternatives for participants. TAF auctioned collateralized loans—generally at terms of either 28 or 84 days—to insured banks to help alleviate strains in term funding markets. We compared interest rates for 28-day TAF loans with 1-month LIBOR, 30-day asset-backed commercial paper (ABCP) rates, and interest rates on very large 1-month unsecured certificates of deposit. We chose these interest rates because they are all indicators of the cost of borrowing for financial institutions in term funding markets. However, each differs from TAF in important ways. For example, LIBOR is based on unsecured loans (TAF loans were secured by collateral), and ABCP, despite being secured, has other features that differ from TAF, including the mix of underlying collateral. We found that LIBOR exceeded TAF interest rates by an average of 22 basis points. ABCP interest rates exceeded TAF interest rates by on average 39 basis points, and interest rates on very large certificates of deposit exceeded TAF interest rates by on average 29 basis points while the program was active. Because of differences between TAF and these measures of market interest rates, these spreads are an imperfect measure of the extent to which banks derived benefits from participating in TAF. PDCF. Our analysis suggests that PDCF provided secured overnight funding on more favorable terms for some types of collateral (such as corporate debt) than market alternatives that some primary dealers might have relied upon in the absence of PDCF. Because PDCF operated in a similar manner to repurchase agreement markets, we compared PDCF terms to available data for triparty and bilateral repurchase agreement transactions. One important term for repurchase agreement loans is the haircut, which is the amount of additional collateral the lender requires over the value of the loan. Repurchase agreement lenders generally require higher haircuts on riskier and less liquid collateral types. PDCF offered loans at the same interest rate (the discount rate charged on discount window loans) for all collateral types and applied a haircut schedule that assigned progressively higher haircuts to riskier assets. We compared PDCF haircuts to market haircuts for selected asset classes in the triparty repurchase agreement market. We found that the haircut required by PDCF was consistently greater than the median haircut in the triparty repurchase agreement market for comparable asset classes.
Thus, borrowers who faced the median haircut on their collateral in the triparty market were better off borrowing in the triparty market than through PDCF, all else being equal. However, the PDCF haircut was smaller than the 75th percentile haircut in the triparty market for a variety of collateral types. This implies that higher-risk borrowers were better off borrowing through PDCF than through the triparty market, at least for certain types of collateral. Smaller haircuts would have allowed these PDCF participants to borrow more against the same collateral than in private repurchase agreement markets. TSLF. TSLF allowed primary dealers to obtain funding for the most commonly pledged collateral types at 32 basis points below an estimated market alternative. When TSLF was created in March 2008, repurchase agreement lenders were requiring higher interest rates and haircuts for loans against a range of less-liquid collateral types and were reluctant to lend against mortgage-related securities. Through TSLF, primary dealers paid an auction-determined interest rate to exchange harder-to-finance collateral for more liquid Treasury securities—which were easier to borrow against in repurchase agreement markets—generally for a term of 28 days. TSLF held separate auctions of Treasury securities against two different schedules of collateral to apply a higher interest rate to riskier collateral. Schedule 1 collateral included higher quality assets, such as agency debt and agency mortgage-backed securities (MBS), and Schedule 2 collateral included Schedule 1 collateral and a broader range of asset types, such as highly-rated private-label MBS. We compared TSLF interest rates to the difference between lower interest rates primary dealers might have paid on repurchase agreements secured by Treasury securities and the higher interest rates they could have paid on repurchase agreements secured by TSLF-eligible collateral. Due to limited availability of interest rate data for repurchase agreements collateralized by other lower-quality collateral eligible for TSLF, such as private-label MBS, we compared TSLF interest rates to the difference or spread between interest rates on repurchase agreements collateralized by agency MBS and repurchase agreements collateralized by Treasury securities. We found that the spread between repurchase agreement interest rates on agency MBS (the most commonly-pledged collateral for TSLF) and Treasury securities exceeded TSLF interest rates by on average 32 basis points while the program was active. CPFF. CPFF purchased 3-month commercial paper at prices that, during the crisis, were lower than market rates on instruments that could have represented alternative funding sources but that were more expensive than average commercial paper rates during normal market conditions. CPFF controlled for changes in short-term interest rates by setting the price of commercial paper issuance to CPFF at a fixed spread above the daily 3-month overnight indexed swap rate, a rate that tracks investor expectations about the future federal funds rate. Table 2 summarizes the pricing structure for CPFF. We compared all-in borrowing costs (an interest rate plus a credit surcharge for unsecured borrowing) for CPFF borrowers with 3-month LIBOR. To determine how CPFF pricing compared to borrowing costs in crisis conditions, we compared CPFF pricing terms to 3-month LIBOR for the period from the failure of Lehman Brothers Holdings Inc. (Sept. 14, 2008) through the date on which CPFF became operational (Oct.
27, 2008). We found that average CPFF pricing terms were lower than the average LIBOR rate by 92 basis points and 44 basis points for CPFF purchases of unsecured commercial paper and collateralized ABCP, respectively. To determine how unsecured CPFF rates compared to benchmarks for borrowing costs in normal market conditions, we applied the CPFF pricing rule for unsecured commercial paper to a 2-month period in 2006 and found that CPFF pricing would have been more expensive than AA unsecured commercial paper interest rates by roughly 200 basis points and LIBOR by over 190 basis points. This analysis suggests that CPFF would have become less attractive to participants as market conditions improved. Treasury capital investments. Analyses we reviewed suggest that the prices Treasury paid for equity in financial institutions participating in TARP exceeded estimated market prices that private investors might have paid for comparable investments in these institutions during the crisis. This pricing is consistent with a policy goal to stabilize financial conditions by improving the equity capitalization of banks. In late 2008, before CPP was announced, banks had difficulty issuing sufficient new equity to investors. We reviewed estimates of the expected budget cost associated with Treasury’s equity funding support programs under TARP, CPP and the Targeted Investment Program (TIP), as well as a valuation analysis commissioned by the Congressional Oversight Panel. Some of the benefits that accrued to banks from participation in equity funding support programs are likely to be proportional to the expected budgetary cost (also known as subsidy rates) estimated for accounting purposes. Treasury and Congressional Budget Office estimates of subsidy rates are based on a net present value analysis—the price and terms which are offered by a federal agency are compared to the lifetime expected cost (net present value) of the equity, and the difference is known as a subsidy. The valuation analysis commissioned by the Congressional Oversight Panel explicitly compared the prices received by Treasury with market-based valuations of securities it determined to be comparable. Estimates of subsidy rates by Treasury, the Congressional Budget Office, and the Congressional Oversight Panel were generally similar for CPP, while the Congressional Budget Office’s estimates for TIP were substantially lower than those of Treasury and the Congressional Oversight Panel (see fig. 1). Based on these three analyses, these estimated subsidy rates suggest that the prices Treasury paid for equity in financial institutions were 18 to 27 percent over estimated market prices for CPP and 26 to 50 percent over estimated market prices for TIP equity. Estimates reflect differences in timing, methodology, and institutions included in the analyses, which we discussed previously and in the note to figure 1. FDIC’s DGP. For the DGP guarantees that we analyzed, the fees for FDIC’s DGP were on average 278 basis points below the private cost of similar guarantees during crisis conditions, but more expensive than similar guarantees that were available in the private market during normal credit conditions. This pricing is consistent with a policy goal to promote financial stability by improving access to sources of debt funding. FDIC’s DGP provided guarantees for certain newly issued senior unsecured debt for banks, bank holding companies, and other eligible institutions. 
When DGP was created in October 2008, lending to financial institutions in public debt markets had dropped dramatically. The fees for participation in DGP were based on the maturity of guaranteed liabilities (the longer the maturity the higher the fee) and the type of financial institution. We analyzed the 100-basis point fee that DGP charged to guarantee debt with a maturity of 1 year, plus the 10-basis point premium charged to bank holding companies. We compared the total DGP fee with the weighted average price of 1-year bank CDS for certain bank holding companies because the guarantee is essentially similar to a private party insuring against the risk of default using a CDS. Our analysis covered the period from the failure of Lehman Brothers (in September 2008) through the date DGP became operational (in October 2008). We found that the cost of insuring against bank default on the private market exceeded the FDIC fee terms by on average 278 basis points, with considerable variation across users—varying from over 1,000 basis points above the DGP fee terms to a few basis points below. We also applied the DGP pricing rule for guaranteeing bank holding company debt to a 2-month period in 2006, before the crisis, and found that DGP pricing would have exceeded the private cost of guarantees by roughly 100 basis points. This pricing suggests that DGP would have become less attractive to participants as market conditions improved. For more detail on our analysis of the prices and terms of all of the emergency programs, please see appendix III. Emergency government programs to stabilize financial markets provided funding support to bank holding companies and insured depository institutions (collectively, banking organizations) of various sizes. This section also focuses on the programs that provided the most significant funding support directly to bank holding companies and their subsidiaries (listed previously in table 1). Agencies made these programs available to specific types of institutions regardless of their size, and institutions of various sizes participated in these programs. Differences in the level of program use by institutions of various sizes were driven in part by differences in how institutions funded themselves. For example, compared to smaller bank holding companies, larger bank holding companies relied to a greater extent on short-term credit markets that were the most severely disrupted during the crisis and participated more in programs intended to address disruptions in these markets. Smaller banking organizations relied more on deposits to fund their activities. To compare the extent to which banking organizations of various sizes used emergency programs, we calculated the percentage of banking organization assets that were supported by emergency programs—either through capital injections, loans, or guarantees—at quarter-end dates for 2008 through 2012. Capital provided by emergency programs includes capital investments by Treasury under CPP and TIP. Loans provided by emergency programs include TAF, TSLF, PDCF, and CPFF loans from the Federal Reserve System. Funding guaranteed by emergency programs includes deposits guaranteed by FDIC through TAGP and debt guaranteed by FDIC through DGP. We then calculated each of these three types of liabilities as a percentage of assets for banking organizations by size for quarter-end dates from mid-2008 to the end of 2012. 
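The following sketch illustrates a simplified version of the utilization measure just described: for each banking organization, program capital, loans, and guaranteed funding are combined and expressed as a percentage of total assets, and the results are averaged within size groups. The institutions and amounts are invented for illustration; the $50 billion threshold mirrors the grouping used in this report.

    # Illustration of the utilization measure described above; data are invented.
    orgs = [
        # total assets, program capital, program loans, guaranteed funding ($ billions)
        {"assets": 1800, "capital": 25.0, "loans": 45.0, "guarantees": 120.0},
        {"assets": 900,  "capital": 10.0, "loans": 20.0, "guarantees": 60.0},
        {"assets": 40,   "capital": 0.5,  "loans": 0.0,  "guarantees": 1.2},
        {"assets": 8,    "capital": 0.1,  "loans": 0.0,  "guarantees": 0.3},
    ]

    def supported_share(org):
        # Program-supported funding (capital + loans + guarantees) as a share of assets.
        return (org["capital"] + org["loans"] + org["guarantees"]) / org["assets"]

    large = [o for o in orgs if o["assets"] >= 50]   # $50 billion or more
    small = [o for o in orgs if o["assets"] < 50]
    for label, group in (("$50 billion or more", large), ("under $50 billion", small)):
        avg = sum(supported_share(o) for o in group) / len(group)
        print(f"{label}: {avg:.1%} of assets supported, on average")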
Finally, for each of the three types of liabilities, we decomposed average liabilities as a percentage of assets for banking organizations of different sizes into two components: (1) the rate of participation in emergency programs by banking organizations of different sizes and (2) the average liabilities as a percentage of assets for those participants. We found that the extent to which banking organizations of different sizes used emergency programs varied over time and across programs. For example, the largest bank holding companies—those with more than $500 billion in assets as of June 30, 2013—used the programs to varying degrees but had exited most of the programs by the end of 2009. Moreover, as of December 31, 2008, average use of emergency programs generally was higher for banking organizations with $50 billion or more in assets than it was for banking organizations with less than $50 billion in assets. Total loans outstanding from Federal Reserve System programs (TAF, TSLF, PDCF, and CPFF) combined were at least 2 percent of assets on average for banking organizations with $50 billion or more in assets but less than 1 percent of assets on average for smaller banking organizations. CPP and TIP capital investments were at least 1.5 percent of assets on average for banking organizations with $50 billion or more in assets and less than 1 percent of assets on average for smaller organizations. Finally, DGP-guaranteed debt and TAGP-guaranteed deposits together were at least 6 percent of assets on average for banking organizations with $50 billion or more in assets and were less than 4 percent of assets on average for smaller banking organizations. However, by December 31, 2010, the Federal Reserve System’s loan programs had closed, and differences in use of remaining programs by banking organizations of different sizes had diminished. For a more detailed discussion of our analysis of utilization of these programs by banking organizations of various sizes, see appendix IV. Several factors influenced the extent to which eligible institutions used emergency programs. As explained above, one factor driving an institution’s level of participation in a program was the extent to which it relied on the type of funding assisted by the program. In addition, market conditions and the speed with which eligible firms recovered affected the amount and duration of use of the programs by different firms. Agencies generally designed program terms and conditions to make the programs attractive only for institutions facing liquidity strains. Use of several of the programs peaked during the height of the financial crisis and fell as market conditions recovered. Federal Reserve Board officials told us that even as markets recovered, funding conditions improved for certain borrowers but not others. As a result, in PDCF, TSLF, and CPFF, several participants remained in the programs while others exited. Participants in CPP required the approval of their primary federal regulator before exiting the program. In addition, several of the programs included limits on the amount of assistance an entity could receive. Under CPP, qualified financial institutions were eligible to receive an investment of between 1 and 3 percent of their risk-weighted assets, up to a maximum of $25 billion. 
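As an illustration of the CPP sizing rule noted above, the sketch below computes the eligible investment range for a hypothetical institution, applying the 1 to 3 percent of risk-weighted assets formula and the $25 billion cap.

    # Illustration of the CPP sizing rule described above:
    # 1 to 3 percent of risk-weighted assets, capped at $25 billion.
    CAP = 25e9

    def cpp_investment_range(risk_weighted_assets):
        low = min(0.01 * risk_weighted_assets, CAP)
        high = min(0.03 * risk_weighted_assets, CAP)
        return low, high

    # Hypothetical institution with $400 billion in risk-weighted assets.
    low, high = cpp_investment_range(400e9)
    print(f"Eligible CPP investment: ${low / 1e9:.0f} billion to ${high / 1e9:.0f} billion")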
To prevent excessive use of CPFF that would be inconsistent with its role as a backstop, the Federal Reserve Board limited the maximum amount a single issuer could have outstanding at CPFF to the greatest amount of U.S.-dollar-denominated commercial paper the issuer had had outstanding on any day between January 1 and August 31, 2008. The Federal Reserve Board also set limits on the maximum amount that institutions could bid in each TAF and TSLF auction. Finally, in some cases, institutions accepted emergency government assistance at the encouragement of their regulators. For example, several institutions accepted TARP capital investments at the encouragement of Treasury or their regulator. However, participation in other programs appears to have been driven by market conditions and other factors. During the financial crisis, the Federal Reserve Board granted a number of exemptions to requirements under Section 23A of the Federal Reserve Act for a range of purposes, such as allowing banks to provide greater liquidity support to the nonbank sector. The number of exemptions granted increased significantly during the crisis, and the majority of these exemptions were granted to U.S. bank holding companies and other firms with $500 billion or more in total assets (see fig. 2). Section 23A of the Federal Reserve Act imposes quantitative limits on certain transactions between an insured depository institution and its affiliates, prohibits banks from purchasing low-quality assets from their nonbank affiliates, and imposes collateral requirements on extensions of credit to affiliates. In letters documenting its approval of exemptions to Section 23A, the Federal Reserve Board has indicated that the twin purposes of Section 23A are (1) to protect against a depository institution suffering losses in transactions with its affiliates, and (2) to limit the ability of a depository institution to transfer to its affiliates the subsidy arising from the institution’s access to the federal safety net. In other words, these restrictions are intended to protect the safety and soundness of banks and to prevent them from subsidizing the activities of nonbank affiliates by passing on any benefits they may receive through access to deposit insurance and the discount window. The Federal Reserve Act granted the Federal Reserve Board authority to exempt transactions and relationships from Section 23A restrictions if such exemptions were in the public interest and consistent with statutory purposes. Prior to the Dodd-Frank Act, the Federal Reserve Board had exclusive authority to grant exemptions to Section 23A. During the financial crisis, the Federal Reserve Board granted a number of exemptions from the requirements of Section 23A, for a range of purposes that included, but were not limited to, the following: Facilitating Liquidity Support for Holders of Mortgage-Related Assets. In August 2007, the Federal Reserve Board issued three similar exemption letters granting Section 23A exemptions to three of the largest U.S. bank holding companies (Citigroup Inc., Bank of America Corporation, and JP Morgan Chase & Co.) to allow their bank subsidiaries (Citibank, N.A.; Bank of America, N.A.; and JPMorgan Chase Bank, N.A.) to engage in securities financing transactions with their affiliated broker-dealers. 
The purpose of these exemptions was to allow each of these banks to extend up to $25 billion of credit (using their broker-dealer affiliates as conduits) to unaffiliated market participants in need of short-term liquidity to finance their holdings of certain mortgage loans and other assets. The Federal Reserve Board’s letters noted that these exemptions would provide significant public benefits by allowing banks to provide a substantial amount of liquidity into the market for these mortgage-related assets. Facilitating Liquidity Support for Holders of Auction-Rate Securities. In December 2008 and January 2009, the Federal Reserve Board granted exemptions to allow four large banks (Fifth Third Bank, BB&T Company, Northern Trust Company, and Wachovia Bank, N.A.) to purchase auction-rate securities and variable rate demand notes from their securities affiliates or parent company. The Federal Reserve Board’s letters noted that these exemptions were intended to facilitate the provision of liquidity by these banks to customers of their affiliates that were holding illiquid auction-rate securities or variable rate demand notes. The securities affiliates of banks had been active in underwriting and selling auction-rate securities, and when these securities became illiquid, the affiliates repurchased them from clients that sought to liquidate their positions. In this case, Section 23A exemptions allowed banks to provide financing for these purchases. The size of transactions permitted under these exemptions ranged from $600 million for The Northern Trust Company to approximately $7 billion for Wachovia Bank, N.A. Facilitating Liquidity Support to Money Market Funds and Repurchase Agreement Markets. In addition to exemptions granted to individual institutions, the Federal Reserve Board granted broad-based exemptions from Section 23A to enable banks to provide liquidity support to repurchase agreement markets and money market mutual funds (MMMF). First, on September 14, 2008, concurrent with the decision to expand eligible collateral types for PDCF and TSLF, the Federal Reserve Board adopted an interim final rule granting a temporary exemption to allow banks to provide their securities affiliates with short-term financing for assets that they ordinarily would have financed through the repurchase agreement markets. The purpose of this exemption was to improve the ability of broker-dealers to continue financing their securities and other assets despite the liquidity shortage in the triparty repurchase agreement market. Several days later, on September 19, the Federal Reserve Board amended Regulation W to grant a temporary exemption from Section 23A requirements for member banks’ purchases of ABCP from affiliated money market funds, subject to certain conditions. The purpose of this exemption was to enable banks to take full advantage of the Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility (AMLF), a program authorized by the Federal Reserve Board to provide loans to banks to fund the purchase of ABCP from MMMFs. Facilitating Acquisitions of Failing Firms. The Federal Reserve Board also granted Section 23A exemptions in connection with its efforts to facilitate private acquisitions of firms whose failure could have destabilized financial markets. Such acquisitions included JP Morgan Chase & Co.’s acquisition of Bear Stearns and Wells Fargo & Company’s acquisition of Wachovia Corporation. JP Morgan Chase & Co. received exemptions that allowed JP Morgan Chase Bank, N.A. 
to, among other things, extend credit to, and issue guarantees on behalf of, former Bear Stearns entities and to purchase a derivatives portfolio valued at approximately $44 billion from Bear Stearns. In November 2008, the Federal Reserve Board granted an exemption to allow Wells Fargo Bank, N.A., to extend up to $17 billion in credit to Wachovia Bank, N.A. to assist it in meeting its short-term funding obligations until the merger was completed. For many of these cases, the Federal Reserve Board granted an exemption to help facilitate liquidity support to nonbank entities as part of its actions to reduce systemic risk and promote financial stability. In granting exemptions, the Federal Reserve Board imposed conditions that were intended to mitigate risks to the bank that would be providing credit, purchasing assets, or engaging in other transactions with affiliates. However, one expert has raised concerns that such conditions might not offer sufficient protection for an insured depository institution during crisis conditions and that these exemptions in aggregate resulted in a large-scale transfer of safety net benefits created for banks to the nonbank, or “shadow banking,” system. As discussed in the next section of this report, the Dodd-Frank Act made changes to Section 23A of the Federal Reserve Act. In addition to introducing emergency programs with broad-based eligibility, federal government agencies took special actions with respect to individual financial institutions on several occasions in 2008 and 2009. While these actions were intended to benefit a range of market participants and the broader financial system, some large U.S. bank holding companies received substantial direct benefits from these actions. Such actions included (1) assistance from multiple agencies to rescue or facilitate the acquisition of troubled firms whose failures posed significant risks to the financial system, and (2) the Federal Reserve Board granting bank holding company status to several nonbank financial companies and providing liquidity support to the London broker-dealers of a few of the largest bank holding companies. On several occasions in 2008 and early 2009, the federal government provided extraordinary support to or facilitated the acquisition of large financial institutions, which benefitted recipients of this assistance and other market participants, such as firms that had large risk exposures to these institutions. Assistance to Facilitate JP Morgan’s Acquisition of Bear Stearns. In 2008, the Federal Reserve Board authorized emergency assistance to avert the failure of Bear Stearns Companies, Inc. (Bear Stearns) and facilitate the acquisition of the firm by JP Morgan Chase & Co. On Friday, March 14, 2008, the Federal Reserve Board voted to authorize FRBNY to provide a $12.9 billion loan to Bear Stearns to enable the firm to avoid bankruptcy and to provide time for potential acquirers, including JP Morgan Chase & Co., to assess its financial condition. On Sunday, March 16, 2008, the Federal Reserve Board announced that FRBNY would lend up to $30 billion against certain Bear Stearns assets to facilitate JP Morgan Chase & Co.’s acquisition of Bear Stearns. During the following week, the terms of this assistance were renegotiated, resulting in the creation of a new lending structure under which a $28.82 billion FRBNY senior loan and a $1.15 billion JP Morgan Chase & Co. subordinated loan funded the purchase of certain Bear Stearns’s assets. 
FRBNY also provided certain regulatory exemptions to JP Morgan Chase & Co. in connection with its agreement to acquire Bear Stearns. For example, the Federal Reserve Board granted an 18-month exemption to allow JP Morgan Chase & Co. to exclude certain Bear Stearns assets from its risk-weighted assets for purposes of applying risk-based capital requirements. Assistance to Government-Sponsored Enterprises. Extraordinary government support to Fannie Mae and Freddie Mac helped to stabilize mortgage markets and the broader financial markets and provided specific benefits to bank holding companies and other firms that likely would have incurred losses if the federal government had allowed these government-sponsored enterprises to fail. On September 6, 2008, the Federal Housing Finance Agency placed Fannie Mae and Freddie Mac into conservatorship out of concern that their deteriorating financial condition threatened their safety and soundness and their ability to fulfill their public mission. Treasury’s investments in Fannie Mae and Freddie Mac under the Senior Preferred Stock Purchase Agreements program represent the federal government’s single largest risk exposure remaining from its emergency actions to assist the financial sector. As of June 30, 2013, cumulative cash draws by the GSEs under this program totaled $187.4 billion and cumulative dividends paid by the GSEs to Treasury totaled $131.6 billion. Assistance to AIG. Federal government actions to prevent the failure of AIG benefitted AIG and its counterparties—which included some of the largest U.S. and foreign financial institutions—and were intended to benefit the broader financial system. In September 2008, the Federal Reserve Board and Treasury determined that market events could have caused AIG to fail, which would have posed systemic risk to financial markets. The Federal Reserve Board and Treasury collaborated to make available up to $182.3 billion in assistance to AIG. This assistance, which began with a revolving credit facility of up to $85 billion from FRBNY, was provided in several stages and was restructured over time. In November 2008, the Federal Reserve Board authorized the creation of two special-purpose vehicles—Maiden Lane II LLC and Maiden Lane III LLC—to purchase certain AIG-related assets. Maiden Lane II was created to alleviate capital and liquidity pressures arising from a securities lending portfolio operated by certain AIG subsidiaries by purchasing residential MBS held in this portfolio. Maiden Lane III helped to fund the purchase of collateralized debt obligations from AIG counterparties that had purchased CDS from AIG to protect the value of those assets. AIG repaid all loans and capital investments it received from government entities during the crisis. In December 2012, Treasury sold its remaining investments in AIG, resulting in a total positive return of $22.7 billion for Treasury and FRBNY. Extraordinary Assistance to Citigroup. On November 23, 2008, Treasury, the Federal Reserve Board, and FDIC announced a package of additional assistance to Citigroup Inc. (Citigroup) that included $20 billion of capital from TIP and a loss-sharing agreement with the government entities that was intended to assure market participants that Citigroup would not fail in the event of larger-than-expected losses on certain of its assets. 
As discussed in our April 2010 report on Treasury’s use of the systemic risk determination, Treasury, FDIC, and the Federal Reserve Board said they provided emergency assistance to Citigroup because they were concerned that a failure of a firm of Citigroup’s size and interconnectedness would have systemic implications. As of September 30, 2008, Citigroup was the second largest banking organization in the United States, with total consolidated assets of approximately $2 trillion. In June 2009, Treasury entered into an agreement to exchange the $25 billion in Citigroup preferred shares purchased in its initial CPP investment for Citigroup common shares to help improve Citigroup’s capital position. In December 2009, Citigroup repaid the $20 billion TIP investment. On December 23, 2009, Citigroup announced that it had entered into an agreement with FDIC, FRBNY, and Treasury to terminate the loss-sharing agreement. As part of the termination agreement, Citigroup agreed to pay a $50 million termination fee to FRBNY. Extraordinary Assistance to Bank of America. On January 16, 2009, Treasury, the Federal Reserve Board, and FDIC announced a similar package of assistance to Bank of America Corporation (Bank of America). The additional assistance included capital through TIP and a loss-sharing agreement that was similar to the one executed for Citigroup. While Bank of America received $20 billion in capital through TIP, the government entities never finalized the announced loss-sharing agreement with Bank of America. In September 2009, the agencies agreed with Bank of America to terminate the loss-sharing agreement-in-principle. As part of the termination agreement, Bank of America paid fees of $276 million to Treasury, $57 million to the Federal Reserve Board, and $92 million to FDIC. Bank of America repaid its $20 billion TIP investment in December 2009. In late 2008, at the height of the financial crisis, the Federal Reserve Board approved applications by several large nonbank financial firms to convert to bank holding company status. Becoming bank holding companies provided these firms with greater access to emergency government funding support, while subjecting them to oversight by the Federal Reserve System and other requirements under the Bank Holding Company Act. Eligibility for TARP capital investments under CPP and debt guarantees through TLGP was generally restricted to depository institutions and their holding companies, and several large firms that became bank holding companies in late 2008 subsequently participated in one or both of these programs. Among the largest firms converting to bank holding companies during the crisis were two investment banks (Goldman Sachs Group, Inc. and Morgan Stanley), two companies that were large providers of credit card products and other services (American Express Company and Discover Financial Services), and two other financial firms (CIT Group Inc. and GMAC LLC). In many cases, obtaining bank holding company status involved firms converting an industrial loan corporation (ILC) into a bank. Federal Reserve Board officials noted that these firms already had access to the discount window through their ILCs and that converting these ILCs to banks did not change their access to the discount window, but their access to discount window liquidity was limited by the amount of assets these subsidiaries—first as ILCs and later as banks—could pledge to the discount window as collateral. 
According to Federal Reserve Board documents, deposits held by these firms were a small fraction of their total consolidated assets at the time they became bank holding companies. While bank holding companies are subject to restrictions on nonbanking activities under the Bank Holding Company Act, Federal Reserve Board orders approving bank holding company applications described nonbanking activities of the companies that were permissible under the act and noted that the act provides each newly formed bank holding company 2 years to conform its existing nonbanking investments and activities to the act’s requirements. On September 21, 2008, the Federal Reserve Board announced that FRBNY would extend credit—on terms similar to those applicable for PDCF loans—to the U.S. and London broker-dealer subsidiaries of Goldman Sachs Group, Inc., Morgan Stanley, and Merrill Lynch & Co. to provide support to these subsidiaries as they became part of bank holding companies that would be regulated by the Federal Reserve System. On November 23, 2008, in connection with other actions taken by Treasury, FDIC, and the Federal Reserve Board to assist Citigroup, the Federal Reserve Board authorized FRBNY to extend credit to the London-based broker-dealer of Citigroup on terms similar to those applicable to PDCF loans. Enacted in July 2010, the Dodd-Frank Act contains provisions intended to modify the scope of federal safety nets for financial firms, place limits on agency authorities to provide emergency assistance, and strengthen regulatory oversight of the largest firms, among other things. FDIC and the Federal Reserve Board have finalized certain changes to traditional safety nets for insured banks, but impacts of the act’s provisions to limit the scope of financial transactions that benefit from these safety nets will depend on how they are implemented. The act also prohibits regulators’ use of emergency authorities to rescue an individual institution and places other restrictions on these authorities. For example, the act effectively removes FDIC’s authority to provide assistance to a single, specific failing bank outside of receivership and grants FDIC new authority to resolve a large failing institution outside of bankruptcy. FDIC has made progress toward implementing its new resolution authority and continues to work to address potential obstacles to the viability of its resolution process as an alternative to bankruptcy, such as challenges that could arise when resolving more than one large institution concurrently. The act also places new restrictions and requirements on the Federal Reserve Board’s emergency lending authority. However, the Federal Reserve Board has not yet completed its process for drafting policies and procedures required by the act to implement these changes or set timeframes for doing so. Finalizing such procedures would help ensure that any future use of this authority complies with Dodd-Frank Act requirements. Finally, the Federal Reserve Board has made progress towards implementing certain enhanced regulatory standards that are intended to reduce the risks that the largest financial institutions pose to the financial system. The Dodd-Frank Act instituted a series of reforms related to the traditional safety nets for insured banks, including changes to deposit insurance and discount window reporting requirements. In addition, the act contains provisions intended to limit the scope of financial transactions that benefit from access to these traditional safety nets. 
These provisions include revisions to the Federal Reserve Board’s authority to permit certain transactions between banks and their affiliates under Section 23A of the Federal Reserve Act, restrictions on the ability of bank holding companies to engage in proprietary trading; and restrictions on the ability of insured banks to engage in certain derivatives transactions. FDIC has implemented Dodd-Frank Act provisions that increased the deposit insurance limit and required FDIC to change the calculation for premiums paid by insured depository institutions. Section 335 of the Dodd-Frank Act permanently raised the standard maximum deposit insurance amount from $100,000 to $250,000 for individual deposit accounts, as previously discussed. FDIC issued and made effective a final rule instituting the increase in August 2010 and required insured depository institutions to comply by January 2011. Section 343 of the act provided temporary unlimited deposit insurance coverage for certain uninsured deposits from December 2010 through December 2012. This coverage expired on December 31, 2012, and transaction accounts can now only be insured to the $250,000 ceiling. Section 331 of the Dodd- Frank Act required FDIC to amend its regulation and modify the definition of an insured depository institution’s assessment base, which can affect the amount of deposit insurance assessment the institution pays into the deposit insurance fund. Under the Dodd-Frank Act, the assessment base changed from total domestic deposits to average consolidated total assets minus average tangible equity (with some possible exceptions). FDIC issued a final rule changing the assessment base in February 2011, and the rule became effective in April 2011. According to FDIC, the change in the assessment base calculation shifted some of the overall assessment burden from community banks to larger institutions that rely less on domestic deposits for their funding than smaller institutions, but without affecting the overall amount of assessment revenue collected. In the quarter after the rule became effective, those banks with less than $10 billion in assets saw a 33 percent drop in their assessments (from about $1 billion to about $700 million), while those banks with over $10 billion in assets saw a 17 percent rise in their assessments (from about $2.4 billion to about $2.8 billion). The Dodd-Frank Act made changes to the Federal Reserve Board’s reporting requirements to increase the transparency for discount window transactions. During and after the crisis, some members of Congress and others expressed concern that certain details of the Federal Reserve System’s discount window and emergency lending activities, including the names of borrowers receiving loans, were kept confidential. Section 1103 of the Dodd-Frank Act requires the Federal Reserve Board to disclose transaction-level details for discount window loans and open market transactions on a quarterly basis after a 2-year delay. The Dodd-Frank Act established similar reporting requirements for the Federal Reserve Board’s Section 13(3) authority, as discussed later. No rulemaking was required, and the Federal Reserve Board began to post the data publicly on its website in September 2012. The first set of releases covered loans made between July and September 2010, and data for subsequent periods are being published quarterly with a 2-year lag. The Dodd-Frank Act also grants GAO authority to audit certain aspects of discount window transactions occurring after July 21, 2010. 
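The change in the deposit insurance assessment base described earlier in this section can be illustrated with a simple sketch. The balance sheet figures and the single flat rate below are hypothetical and ignore FDIC's actual rate schedules; the point is only that a base of average consolidated total assets minus average tangible equity weighs more heavily on institutions that fund relatively little of their balance sheet with domestic deposits.

```python
# Hypothetical comparison of the old and new deposit insurance assessment bases.
# Old base: total domestic deposits.
# New base (Dodd-Frank Act): average consolidated total assets minus average tangible equity.
# The single flat rate used here is illustrative only.

ILLUSTRATIVE_RATE = 0.0001  # 1 basis point, purely for comparison

banks = [
    # Larger institutions tend to fund less of their balance sheet with domestic deposits.
    {"name": "Large bank", "domestic_deposits": 400e9, "avg_total_assets": 1_000e9, "avg_tangible_equity": 80e9},
    {"name": "Community bank", "domestic_deposits": 8e9, "avg_total_assets": 9e9, "avg_tangible_equity": 1.5e9},
]

for bank in banks:
    old_base = bank["domestic_deposits"]
    new_base = bank["avg_total_assets"] - bank["avg_tangible_equity"]
    print(bank["name"],
          f"old assessment ${old_base * ILLUSTRATIVE_RATE:,.0f}",
          f"new assessment ${new_base * ILLUSTRATIVE_RATE:,.0f}")
```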
The Dodd-Frank Act made numerous changes to Section 23A of the Federal Reserve Act that both significantly expanded the scope of activities covered by Section 23A’s restrictions and created new requirements for participation by FDIC and the OCC in granting exceptions. As previously discussed, the Federal Reserve Board granted a number of exemptions to Section 23A during the crisis. Some observers have raised concerns that these exemptions in aggregate resulted in a large-scale transfer of federal safety net benefits to the nonbank, or “shadow banking,” system. The changes listed below, with the exception of changes related to investments in private funds, did not require rulemakings and became effective on July 21, 2012. The Dodd-Frank Act gave FDIC and OCC, jointly with the Federal Reserve Board, the authority to grant Section 23A exemptions by order for institutions they supervise. The Dodd-Frank Act requires the regulators to notify FDIC of any proposed exemption and give FDIC 60 days to object in writing, should FDIC determine that the proposed exemption constitutes an unacceptable risk to the deposit insurance fund. The Federal Reserve Board retains the authority to grant exemptions by regulation. The Dodd-Frank Act expanded the scope of activities that are covered by Section 23A by amending the definition of covered transactions to include derivatives transactions with affiliates and transactions with affiliates that involve securities lending and borrowing that may cause a bank to face credit exposure to an affiliate. The Dodd-Frank Act also removed the exception from the 10 percent quantitative limit for certain covered transactions between a bank and its financial subsidiary and extended Sections 23A and 23B to cover permitted investments in certain private funds. The Dodd-Frank Act changed the collateral requirements for Section 23A transactions by requiring banks to maintain the required level of collateral at all times for covered transactions subject to collateralization. Previously, banks had to post collateral only at the time they entered into the covered transaction. This change was designed to strengthen the protection granted to banks extending credit to their affiliates by ensuring that the collateral remains properly valued and continues to shield the bank’s interest from fluctuations in the market prices of pledged assets. As of October 2013, the Federal Reserve Board had granted only two exemptions since the enactment of the Dodd-Frank Act, according to available information on its website. How the Federal Reserve Board, FDIC, and OCC might respond to requests for exemptions in the future is uncertain. Representatives from one large bank told us that their primary regulator advised them that, because of FDIC’s required approval, they should not expect exemptions to be available going forward. However, one academic has expressed concern about how exemptions might be applied under different circumstances, such as in periods of economic stress. Proprietary Trading (Volcker Rule). Agencies have not yet issued final rules to implement the Dodd-Frank Act’s restrictions on proprietary trading—trading activities conducted by banking entities for their own account as opposed to those of their clients. A number of market participants and researchers with whom we spoke maintain that the ability of banking entities to use federally insured deposits to seek profits for their own account provides incentives for them to take on excessive risk. 
To address these concerns, Section 619 of the Dodd-Frank Act (also known as the Volcker Rule) generally prohibits proprietary trading by insured depository institutions and their affiliates and places restrictions on sponsorship or investment in hedge and private equity funds. An FSOC study noted that implementing the act’s restrictions on proprietary trading will be challenging because certain trading activities exempted from the act’s restrictions may appear very similar to proprietary trading activities that the act seeks to restrict. While regulators issued proposed rules in November 2011 and February 2012, no final or interim final rules have been issued. Section 716 of the Dodd-Frank Act requires banks that are registered dealers of derivatives known as swaps to transfer certain swap activities to nonbank affiliates, or lose access to deposit insurance and the Federal Reserve System liquidity provided through the discount window for certain activities taken in connection with the swap entity’s swap business. Section 716’s prohibition on federal assistance to swaps entities became effective in July 2013, but the law allowed for an initial 2- year extension as well as an additional 1-year extension. Several banks applied for and were granted 2-year extensions by the Federal Reserve Board and OCC, and those financial institutions now have until July 2015 to comply, with the additional option of applying for another 1-year exemption. The Dodd-Frank Act restricts emergency authorities used by financial regulators during the most recent financial crisis, such as FDIC’s open bank assistance authority; provides FDIC with new resolution authority to resolve a large, complex failing firm in a manner that limits the disruption to the financial system; and establishes a requirement for certain firms to develop and submit to regulators resolution plans (known as living wills) for their resolution under bankruptcy. The Dodd-Frank Act restricts FDIC’s authority to provide open bank assistance to an individual failing bank outside of receivership and replaces it with a new authority, subject to certain restrictions and a joint resolution of congressional approval, to create a debt-guarantee program with broad-based eligibility. Previously, FDIC could provide open bank assistance upon a joint determination by FDIC, the Federal Reserve Board, and the Secretary of the Treasury that compliance with certain cost limitations would result in serious adverse effects on economic conditions or financial stability and that such assistance could mitigate these systemic effects. Sections 1104 through 1106 of the Dodd-Frank Act provide permanent authority for FDIC to establish a widely available program to guarantee certain debt obligations of solvent insured depository institutions or solvent bank holding companies during times of severe economic distress, upon a liquidity event finding. In addition, institutions would have to pay fees for these guarantees as they did under TLGP during the crisis. In order for FDIC to exercise the authority, the Dodd-Frank Act requires the Secretary of the Treasury (in consultation with the President) to determine the maximum amount of debt outstanding that FDIC can guarantee, and the guarantee authority requires congressional approval. Furthermore, the Dodd-Frank Act amendments to the Federal Deposit Insurance Act that provided for temporary unlimited deposit insurance for noninterest-bearing transaction accounts were repealed as of January 1, 2013. 
The FDIC may not rely on this authority or its former systemic risk exception authority to provide unlimited deposit insurance for transaction accounts in a future crisis. The Dodd-Frank Act includes two key reforms intended to facilitate the orderly resolution of a large failing firm without a taxpayer-funded rescue: (1) the Orderly Liquidation Authority (OLA), through which FDIC can liquidate large financial firms outside of the bankruptcy process; and (2) requirements for bank holding companies with $50 billion or more in assets and nonbank financial companies designated by FSOC to formulate and submit to regulators resolution plans (or “living wills”) that detail how the companies could be resolved in bankruptcy in the event of material financial distress or failure. OLA gives FDIC the authority, subject to certain constraints, to liquidate large financial firms, including nonbanks, outside of the bankruptcy process. This authority allows for FDIC to be appointed receiver for a financial firm if the Secretary of the Treasury determines that the firm’s failure and its resolution under applicable federal or state law, including bankruptcy, would have serious adverse effects on U.S. financial stability and no viable private sector alternative is available to prevent the default of the financial company. While the Dodd-Frank Act does not specify how FDIC must exercise its OLA resolution authority and while a number of approaches have been considered, FDIC’s preferred approach to resolving a firm under OLA is referred to as Single Point-of-Entry (SPOE). Under the SPOE approach, FDIC would be appointed receiver of a top-tier U.S. parent holding company of the financial group determined to be in default or in danger of default following the completion of the appointment process set forth under the Dodd-Frank Act. Immediately after placing the parent holding company into receivership, FDIC would transfer some assets (primarily the equity and investments in subsidiaries) from the receivership estate to a bridge financial holding company. By taking control of the firm at the holding company level, this approach is intended to allow subsidiaries (domestic and foreign) carrying out critical services to remain open and operating. One key factor for the success of the SPOE approach is ensuring that the holding company builds up sufficient loss-absorbing capacity to enable it to recapitalize its subsidiaries, if necessary. In a SPOE resolution, at the parent holding company level, shareholders would be wiped out, and unsecured debt holders would have their claims written down to reflect any losses that shareholders cannot cover. Under the Dodd-Frank Act, officers and directors responsible for the failure cannot be retained. FDIC expects the well-capitalized bridge financial company and its subsidiaries to borrow in the private markets and from customary sources of liquidity. The new resolution authority under the Dodd-Frank Act provides a back-up source for liquidity support, the Orderly Liquidation Fund, which could provide liquidity support to the bridge financial company if customary sources of liquidity are unavailable. The law requires FDIC to recover any losses arising from a resolution by assessing bank holding companies with $50 billion or more in consolidated assets, nonbank financial holding companies designated for supervision by the Federal Reserve System, and other financial companies with $50 billion or more in consolidated assets. 
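The loss-allocation sequence at the parent holding company under SPOE can be sketched in stylized form. The sketch below illustrates only the concept described above, not FDIC's actual receivership mechanics, and the dollar amounts are hypothetical.

```python
# Stylized illustration of loss allocation at the parent holding company
# under the Single Point-of-Entry concept described above: shareholders are
# wiped out first, and unsecured debt holders absorb any remaining losses.
# Figures are hypothetical; actual receivership mechanics are more complex.

def allocate_losses(equity, unsecured_debt, losses):
    """Apply losses first to equity, then to unsecured parent-company debt."""
    loss_to_equity = min(equity, losses)
    remaining = losses - loss_to_equity
    loss_to_debt = min(unsecured_debt, remaining)
    return {
        "remaining_equity": equity - loss_to_equity,
        "remaining_unsecured_debt": unsecured_debt - loss_to_debt,
        "unabsorbed_losses": remaining - loss_to_debt,
    }

# Parent holding company with $40 billion of equity and $120 billion of
# unsecured debt facing $70 billion of losses: equity is wiped out and the
# unsecured debt is written down by $30 billion.
print(allocate_losses(equity=40e9, unsecured_debt=120e9, losses=70e9))
```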
Progress has been made to implement the reforms related to resolving large, complex financial institutions. FDIC has largely completed the core rulemakings necessary to carry out its systemic resolution responsibilities. For example, FDIC approved a final rule implementing OLA that addressed the priority of claims and the treatment of similarly situated creditors. The FDIC plans to seek public comment on its resolution strategy by the end of 2013. In addition, FDIC has worked with other financial regulatory agencies, both domestic and foreign, to make extensive preparations and to conduct planning exercises in order to be as prepared as possible to successfully resolve a firm whose failure could threaten the stability of the financial system. Although progress has been made, FDIC and others have acknowledged that OLA is new and untested, and several challenges to its effectiveness remain. For example, FDIC could face difficulties in effectively managing the failure of one or more large bank holding companies or credibly imposing losses on the creditors of those holding companies. These challenges include the following: Financial stability concerns. FDIC may find it difficult to impose losses on all creditors of failing financial institutions because of concerns about financial stability. FDIC could in principle transfer certain bank holding company liabilities to a bridge holding company in order to protect those creditors. This concern has been subject to debate. For example, a report by the Bipartisan Policy Center, a think-tank, emphasized the importance of protecting short-term creditors of systemically important firms, while an industry association report emphasized the importance of imposing losses on short-term creditors in order to maintain market discipline. While the Dodd-Frank Act allows FDIC to treat similarly situated creditors differently, it places restrictions on FDIC’s ability to do so. Any transfer of liabilities from the receivership to the bridge financial company that has a disparate impact upon similarly situated creditors will only be made if such a transfer will maximize the return to those creditors left in the receivership and if such action is necessary to initiate and continue operations essential to the bridge financial company. Global cooperation. Some experts have questioned how FDIC would handle issues related to the non-U.S. subsidiaries of a failed firm. For example, if a global U.S. firm were at risk of being placed in receivership under OLA, foreign regulators might act to ring-fence assets of a non-U.S. subsidiary to prevent these assets from being transferred abroad where they would not be available to protect counterparties in their jurisdiction. Such a development could increase financial instability by reducing the assets available to a U.S. firm to satisfy creditors’ claims. Because SPOE involves losses borne only by holding company creditors, some observers have suggested this approach would avoid potential challenges associated with the failure of foreign subsidiaries or actions of foreign regulators to ring-fence the assets of a subsidiary. For example, if subsidiary liabilities were guaranteed under SPOE, foreign regulators would not need to ring-fence foreign subsidiaries in order to protect foreign customers or creditors. Multiple, simultaneous insolvencies. Experts have questioned whether FDIC has sufficient capacity to use OLA to handle multiple failures of systemically important firms and thus prevent further systemic disruption. 
In addition, FDIC may find it more difficult to impose losses on creditors when multiple large institutions are failing at once, which could reduce the credibility of OLA. According to a survey of investors, few respondents believed that FDIC could effectively use OLA to handle the resolution of multiple firms simultaneously. Title I of the Dodd-Frank Act requires bank holding companies with $50 billion or more in consolidated assets and nonbank financial companies designated by FSOC to formulate and submit to FDIC, the Federal Reserve Board, and FSOC resolution plans (or “living wills”) that detail how the companies could be resolved in the event of material financial distress or failure. The Federal Reserve Board and FDIC finalized rules relating to resolution plans, and the large financial institutions that were the first firms required to prepare such plans submitted these to regulators as expected in July 2012. Regulators reviewed these initial plans and developed guidance on what information should be included in 2013 resolution plan submissions. Experts have expressed mixed views on the usefulness of the living wills. Some experts have noted that resolution plans may provide regulators with critical information about a firm’s organizational structure that could aid the resolution process or motivate complex firms to simplify their structures, and this simplification could help facilitate resolution. However, other experts have told us that resolution plans may provide limited benefits in simplifying firm structures, in part because tax, jurisdictional, and other considerations may outweigh the benefits of simplification. Furthermore, some experts commented that although resolution plans may assist regulators in gaining a better understanding of the structures and activities of complex financial firms, the plans may not be useful guides during an actual liquidation—in part because the plans could become outdated or because the plans may not be helpful during a crisis. The Dodd-Frank Act creates new restrictions and requirements associated with the Federal Reserve Board’s Section 13(3) authority. Generally, the act prohibits use of Section 13(3) authority to assist an individual institution (as the Federal Reserve Board did with Bear Stearns and AIG). While the act continues to allow the Federal Reserve Board to use 13(3) authority to authorize programs with broad-based eligibility, it sets forth new restrictions and requirements for such programs. For example, the act prohibits a Reserve Bank from lending to an insolvent firm through a broad-based program or creating a program designed to remove assets from a single and specific institution’s balance sheet. According to Federal Reserve Board staff, under its current Section 13(3) authority, the Federal Reserve Board could re-launch emergency programs to assist the repurchase agreement, commercial paper, and other credit markets, if these markets became severely strained and if the program is broad-based and meets the other requirements imposed by the Dodd-Frank Act. The Dodd-Frank Act also includes additional transparency and reporting requirements should the Federal Reserve Board exercise its Section 13(3) authority. Although the Dodd-Frank Act requires the Federal Reserve Board to promulgate regulations that establish policies and procedures governing any future lending under Section 13(3) authority, Federal Reserve Board officials told us that they have not yet completed the process for drafting these policies and procedures. 
Federal Reserve Board staff have made progress in drafting these policies and procedures by regulation, but have not set time frames for completing and publicly proposing a draft regulation. While there is no mandated deadline for completion of the procedures, the Dodd-Frank Act does require the Federal Reserve Board to establish the policies and procedures “as soon as is practicable.” According to a Federal Reserve Board official, in implementing its regulatory responsibilities under the Dodd-Frank Act, the Federal Reserve Board has focused first on the required regulations that have statutory deadlines and the regulations which are specifically directed at enhancing the safety and soundness of the financial system. Although the act did not set a specific deadline, the Federal Reserve Board can better ensure accountability for implementing rulemaking and more timely completion of these procedures by setting internal timelines for completing the rulemaking process. Furthermore, finalizing these policies and procedures could help the Federal Reserve Board to ensure that any future emergency lending does not assist an insolvent firm and complies with other Dodd-Frank Act requirements. Completing these policies and procedures could also address prior recommendations we made with respect to the Federal Reserve System’s emergency assistance programs. For example, in our July 2011 report, we recommended that the Chairman of the Federal Reserve Board direct Federal Reserve Board and Reserve Bank staff to set forth the Federal Reserve Board’s process for documenting its justification for each use of section 13(3) authority. We noted that more complete documentation could help the Federal Reserve Board ensure that it is complying with the Dodd-Frank Act’s requirement on its use of this authority. The Federal Reserve Board agreed that this prior report’s recommendations would benefit its response to future crises and agreed to strongly consider how best to respond. The Dodd-Frank Act also introduced a number of regulatory changes designed to reduce the risks that the largest financial institutions pose to the financial system. A notable change is a set of new prudential requirements and capital standards designed to strengthen the regulatory oversight and capital base of large financial institutions. The Federal Reserve Board has made progress towards implementing these enhanced regulatory standards. The Dodd-Frank Act requires the Federal Reserve Board to create enhanced capital and prudential standards for bank holding companies with $50 billion or more in consolidated assets and nonbank financial holding companies designated by FSOC. The act’s provisions related to enhanced prudential standards for these covered firms include the following: Risk-based capital requirements and leverage limits. The Federal Reserve Board must establish capital and leverage standards, which as proposed would include a requirement for covered firms to develop capital plans to help ensure that they maintain capital ratios above specified standards, under both normal and adverse conditions. In addition, the Federal Reserve Board has announced its intention to apply capital surcharges to some or all firms based on the risks firms pose to the financial system. Liquidity requirements. The Federal Reserve Board must establish liquidity standards, which as proposed would include requirements for covered firms to hold liquid assets that can be used to cover their cash outflows over short periods. 
Single-counterparty credit limits. The Federal Reserve Board must issue rules that, in general, limit the total net credit exposure of a covered firm to any single unaffiliated company to 25 percent of its total capital stock and surplus. Risk management requirements. Publicly traded covered firms must establish a risk committee and be subject to enhanced risk management standards. Stress testing requirements. The Federal Reserve Board is required to conduct an annual evaluation of whether covered firms have sufficient capital to absorb losses that could arise from adverse economic conditions. Debt-to-equity limits. Certain covered firms may be required to maintain a debt-to-equity ratio of no more than 15-to-1. Early remediation. The Federal Reserve Board is required to establish a regulatory framework for the early remediation of financial weaknesses of covered firms in order to minimize the probability that such companies will become insolvent and the potential harm of such insolvencies to the financial stability of the United States. Some of these rules have been finalized, while others have not. For example, in October 2012, the Federal Reserve Board issued a final rule implementing the supervisory and company-run stress test requirements. In December 2012, the Federal Reserve Board issued proposed regulations designed to implement enhanced prudential standards and early remediation requirements for foreign banking organizations and foreign nonbank financial companies. The Federal Reserve Board intends to satisfy some aspects of the Dodd- Frank Act’s heightened prudential standards rules for bank holding companies with total consolidated assets of $50 billion or more through implementation of the new Basel Committee on Banking Supervision standards, known as Basel III. The new standards seek to improve the quality of regulatory capital and introduce a new minimum common equity requirement. Basel III also raises the quantity and quality of capital required and introduces capital conservation and countercyclical buffers designed to better ensure that banks have sufficient capital to absorb losses in a future crisis. In addition, Basel III establishes for the first time an international leverage standard for internationally active banks. Consistent with that intention, in July 2013 FDIC, the Federal Reserve Board, and OCC finalized a rule that revised risk-based and leverage capital requirements for banking organizations. The interim final rule implements a revised definition of regulatory capital, a new common equity Tier 1 minimum capital requirement, a higher minimum Tier 1 capital requirement, and a supplementary leverage ratio that incorporates a broader set of exposures in the denominator. In addition, in July 2013 FDIC, the Federal Reserve Board, and OCC proposed a rule to establish a new leverage buffer. Specifically, the proposed rule requires bank holding companies with more than $700 billion in consolidated total assets or $10 trillion in assets under custody to maintain a Tier 1 capital leverage buffer of at least 2 percent above the minimum supplementary leverage ratio requirement of 3 percent, for a total of 5 percent. In addition to the leverage buffer for covered bank holding companies, the proposed rule would require insured depository institutions of covered bank holding companies to meet a 6 percent supplementary leverage ratio to be considered “well capitalized” for prompt corrective action purposes. The proposed rule would take effect beginning on January 1, 2018. 
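The proposed leverage buffer described above reduces to a simple ratio test. The sketch below applies the 5 percent threshold for covered bank holding companies and the 6 percent well-capitalized threshold for their insured depository institutions to hypothetical figures; it does not reflect the agencies' full definition of total leverage exposure.

```python
# Illustrative check of the proposed enhanced supplementary leverage ratio
# standards described above. The supplementary leverage ratio is Tier 1
# capital divided by total leverage exposure (which includes certain
# off-balance-sheet exposures). All figures below are hypothetical.

MIN_SLR = 0.03           # minimum supplementary leverage ratio
BHC_BUFFER = 0.02        # proposed buffer for covered bank holding companies
IDI_WELL_CAP_SLR = 0.06  # proposed "well capitalized" threshold for their insured depository institutions

def supplementary_leverage_ratio(tier1_capital, total_leverage_exposure):
    """Return Tier 1 capital as a share of total leverage exposure."""
    return tier1_capital / total_leverage_exposure

# Hypothetical covered bank holding company and its lead insured depository institution.
bhc_slr = supplementary_leverage_ratio(tier1_capital=120e9, total_leverage_exposure=2_300e9)
idi_slr = supplementary_leverage_ratio(tier1_capital=90e9, total_leverage_exposure=1_600e9)

print(f"BHC SLR {bhc_slr:.2%}: meets 5 percent standard? {bhc_slr >= MIN_SLR + BHC_BUFFER}")
print(f"IDI SLR {idi_slr:.2%}: well capitalized at 6 percent? {idi_slr >= IDI_WELL_CAP_SLR}")
```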
During the 2007-2009 financial crisis, federal agencies determined that expanding support to insured banks through traditional safety nets—the discount window and deposit insurance—would not be sufficient to stem disruptions to important credit markets. The Federal Reserve System, Treasury, and FDIC introduced new programs to provide general funding support to the financial sector, and some of these programs provided support at the bank holding company level or directly to nonbank financial institutions. These programs helped to improve financial conditions, and bank holding companies and their subsidiaries also experienced individual benefits from participating in particular programs, including liquidity benefits from programs that allowed them to borrow at lower interest rates and at longer maturities than might have been available in the markets. In addition, the Federal Reserve Board granted exemptions to allow banks to channel additional funding support to nonbank financial firms that lacked direct access to the federal safety nets for insured depository institutions. Government assistance to prevent the failures of large financial institutions—such as Fannie Mae, Freddie Mac, and AIG— also benefited bank holding companies, their subsidiaries, and other firms that had large risk exposures to these institutions. While these actions collectively helped to avert a more severe crisis, they raised concerns about moral hazard and the appropriate scope of federal safety nets for the financial sector. The Dodd-Frank Act contains provisions that aim to restrict future government support for financial institutions, but the effectiveness of these provisions will depend in large part on how agencies implement them. Among other things, the act places new restrictions on the Federal Reserve Board and FDIC’s emergency authorities and grants FDIC new resolution authority to resolve a large failing institution outside of the bankruptcy process. While the act continues to allow the Federal Reserve Board to use its authority under Section 13(3) of the Federal Reserve Act to authorize programs with broad-based eligibility, it sets forth new restrictions and requirements for such programs, including a requirement that lending not assist insolvent firms. The act also requires the Federal Reserve Board to establish policies and procedures governing future actions under this authority. As of the date of this report, the Federal Reserve Board has not yet completed its process for drafting these policies and procedures and has not set time frames for doing so. A Federal Reserve Board official indicated that the Board of Governors has focused first on completion of other required regulations that have statutory deadlines and the regulations that are specifically directed at enhancing the safety and soundness of the U.S. financial system. While the act did not set a specific deadline, setting time frames could help ensure more timely completion of these policies and procedures. Moreover, finalizing these procedures could help the Federal Reserve Board to ensure that any future emergency lending does not assist a failing firm and complies with other new requirements. Consistent with the changes to Federal Reserve Board authorities, the act removes FDIC’s authority to provide open bank assistance under the systemic risk exception while allowing FDIC (subject to congressional approval) to provide certain assistance through a broadly available program. 
FDIC continues to work to implement its new resolution authority. The viability and credibility of its resolution process as an alternative to placing a systemically important firm into bankruptcy is a critical part of removing market expectations of future extraordinary government assistance. The act also contains provisions to limit the scope of financial transactions that benefit from access to federal safety nets, although it remains to be seen how these provisions will be implemented. For example, the act could result in fewer regulatory exemptions allowing banks to provide additional funding to their nonbank affiliates. Finally, certain provisions of the act that require the Federal Reserve Board to subject the largest financial firms to heightened prudential standards have not been fully implemented but could reduce the risks that those institutions pose to the financial system. To better ensure that the design and implementation of any future emergency lending programs comply with Dodd-Frank Act requirements in a timely manner, we recommend that the Chairman of the Board of Governors of the Federal Reserve System set timeframes for completing the process for drafting policies and procedures governing the use of emergency lending authority under Section 13(3) of the Federal Reserve Act. We provided copies of this draft report to the FDIC, the Federal Reserve Board, FSOC, OCC, and Treasury for their review and comment. We also provided excerpts of the draft report for technical comment to the Federal Housing Finance Agency. All of the agencies provided technical comments, which we have incorporated, as appropriate. In its written comments, which are reprinted in appendix V, the Federal Reserve Board accepted our recommendation and noted that it has made progress toward completing draft policies and procedures governing the use of its emergency lending authority under Section 13(3) of the Federal Reserve Act. The Federal Reserve Board’s letter referred to its Chairman’s July 2013 remarks on the status of these efforts. The Chairman said that he was hopeful that a final product would be completed relatively soon, perhaps by the end of this year. He further noted that in the meantime, the law is clear about what the Federal Reserve Board can and cannot do. Based on these remarks, we conducted further audit work at the Federal Reserve Board and revised our draft to include additional information about the Federal Reserve Board’s progress towards drafting the required policies and procedures. While the Federal Reserve Board has made progress on a draft regulation, it has not set timeframes for completing the drafting process and issuing a final regulation. Setting timeframes for completing draft and final policies and procedures would help to ensure more timely completion of the rulemaking process. Furthermore, while certain restrictions outlined in the act may not require clarification by rulemaking, the Dodd-Frank Act explicitly directs the Federal Reserve Board to draft policies and procedures to help ensure that it complies with the full set of new restrictions and requirements the act imposes on its emergency lending authority. In its response, the Federal Reserve Board also noted that Federal Reserve System and FDIC assistance was repaid with interest and suggested that it would be helpful for GAO, perhaps in a future report, to analyze the offsetting costs paid by financial institutions assisted through the emergency programs. 
We note that our draft report contained some information and analyses related to such offsetting costs. In table 1 on pages 14 through 16, we describe the key terms of selected broad-based programs, including interest, fees, and dividends that participating institutions were required to pay for this assistance. Furthermore, our draft report noted that one indicator of the extent to which an institution benefitted from participation in an emergency government program is the relative price of estimated market alternatives to the program. On pages 21 through 29, we report the results of our analyses of the pricing terms of some of the largest programs that provided funding support to bank holding companies and other eligible financial institutions. While past GAO reports have reported on the income earned by the Federal Reserve System, FDIC, and Treasury on their crisis interventions, this information is not relevant to this report’s discussion of the support that bank holding companies received during the government’s attempt to stabilize the financial system. As we discussed, these government interventions helped to avert a more severe crisis, but raised questions about moral hazard as market participants may expect similar emergency actions in future crises. Treasury also provided written comments, which are reprinted in appendix VI. Treasury noted that the emergency programs discussed in the report were necessary to prevent a collapse of the financial system and that they created economic benefits not only for individual firms, large and small, but also for the financial system and the broader economy. Treasury also observed that the Dodd-Frank Act reforms discussed in our draft report were consistent with its commitment to ending “too big to fail.” In separate comments provided via email, Treasury and FSOC provided suggestions related to the report’s analyses of the pricing and utilization of selected emergency programs. In response to these suggestions, we added additional information about the exclusion of observations from our pricing analyses, and added data on average assets per institution to Table 3 in appendix IV, among other changes. Treasury and FSOC also suggested that GAO consider using different benchmarks for analyzing the pricing for the Federal Reserve System’s CPFF and FDIC’s DGP. While analyses of these suggested benchmarks (short-dated bond prices for CPFF and 2-3 year bond prices for DGP borrowers) could provide useful insights into the robustness of our results, these analyses also have limitations and would not necessarily improve on the analyses of the benchmarks that we conducted. We concluded that the analyses included in our report are appropriate. As noted in the report, while these analyses have limitations, we determined that they are sufficient for our purposes. We note that Federal Reserve System and FDIC staff with whom we discussed our selected benchmarks for these programs agreed that the benchmarks we used in our pricing analysis are appropriate. We are sending copies of this report to FDIC, the Federal Reserve Board, FSOC, OCC, Treasury, interested congressional committees, members, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-8678 or EvansL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VII. The objectives of our report were to examine: (1) support banks and bank holding companies received as a result of government efforts to stabilize financial markets during the financial crisis of 2007-2009; and (2) recent statutory and regulatory changes related to government support for banks and bank holding companies and factors that could impact the effectiveness of these changes. In terms of scope, the first section of this report addresses benefits that bank holding companies and their subsidiaries received during the crisis from actual government support provided through emergency actions. It does not address benefits that some financial institutions may have received and may continue to receive from perceived government support. In a second report to be issued in 2014, we will report the results of our examination into whether the largest bank holding companies have received funding cost or other economic advantages as a result of expectations that the government would not allow them to fail. To address our first objective, we reviewed documents from financial regulatory agencies—the Board of Governors of the Federal Reserve System (Federal Reserve Board), the Federal Deposit Insurance Corporation (FDIC), and the Department of the Treasury (Treasury)—and analyzed agency data on emergency government actions to stabilize financial markets. Our review focused on (1) emergency government programs that provided funding support to bank holding companies or their subsidiaries as well as other eligible financial institutions, (2) government actions that provided extraordinary assistance to individual financial institutions, and (3) regulatory exemptions that allowed banks to engage in certain transactions with their nonbank affiliates. To identify the programs that provided the most significant funding support directly to bank holding companies or their subsidiaries, we reviewed program eligibility rules and data on program participation for programs created during the 2007-2009 financial crisis by Treasury, FDIC, and the Federal Reserve System. Specifically, we identified a set of emergency programs created during the crisis that provided at least $10 billion in direct funding support to bank holding companies or their subsidiaries. We determined that these programs included Treasury’s Capital Purchase Program (CPP); FDIC’s Temporary Liquidity Guarantee Program (TLGP); and the Federal Reserve System’s Term Auction Facility (TAF), Primary Dealer Credit Facility (PDCF), Term Securities Lending Facility (TSLF), and Commercial Paper Funding Facility (CPFF). To describe the purpose, terms, and conditions of these programs and other emergency government actions discussed in our first objective, we reviewed agency documents and included information and analyses from prior GAO work on the Troubled Asset Relief Program (TARP), the Federal Reserve System’s emergency programs, and other emergency assistance provided to the financial sector. To obtain perspectives on the benefits that bank holding companies received from emergency government actions, we reviewed papers by staff of regulators and other subject-matter experts and interviewed federal financial regulators, representatives of bank holding companies that received emergency government assistance, and academics. 
For the Federal Reserve System and FDIC programs that were among those that provided the most significant funding support, we compared the pricing and terms of this assistance (such as interest rates and fees) to indicators of funding market conditions during normal and crisis conditions. While this analysis provides a measure of program pricing versus potential market alternatives, it does not produce a precise quantification of the benefits that accrued to participating financial institutions. To determine the extent to which emergency equity support programs, CPP and the Targeted Investment Program (TIP), were priced more generously than estimated market alternatives, we reviewed estimates of the expected budget cost associated with equity funding support programs as well as a valuation analysis commissioned by the Congressional Oversight Panel (COP). For more information about the methodology for our analysis of the pricing and terms of these programs and associated limitations, see appendix III. For programs that provided the most significant direct funding support, to compare the extent to which banking organizations of various sizes used these emergency programs, we calculated the percentage of banking organization assets that were supported by emergency programs—either through capital injections, loans, or guarantees—at quarter-end dates for 2008 through 2012. For more information about our methodology for analyzing program utilization, see appendix IV. Finally, we obtained and analyzed Federal Reserve Board documentation of its decisions to grant exemptions to Section 23A of the Federal Reserve Act and to approve applications from financial companies to convert to bank holding company status. To address our second objective, we identified and reviewed relevant statutory provisions, regulations, and agency documents. To identify recent statutory and regulatory changes related to government support for banks and bank holding companies, we reviewed sections of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) that change rules or create new requirements for safety net programs for insured depository institutions; further restrict the types of financial activities that can be conducted by insured depository institutions or their holding companies; make changes to agencies’ emergency authorities to assist or resolve financial institutions; and subject the largest bank holding companies to enhanced regulatory oversight and standards. To corroborate our selection of Dodd-Frank Act provisions, we obtained the views of regulatory officials and financial markets experts on the provisions that are related to government support for banks and bank holding companies. To update the status of agencies’ efforts to implement these provisions, we reviewed agencies’ proposed and final rules, and interviewed staff from FDIC, the Federal Reserve Board, the Office of the Comptroller of the Currency, and Treasury. We also reviewed relevant congressional testimonies and other public statements by agency officials. We identified statutory provisions or requirements that agencies had not fully implemented and interviewed agency staff about planned steps to complete implementation. To describe factors that could impact the effectiveness of relevant provisions, we reviewed prior GAO work on the potential impacts of Dodd-Frank Act provisions. 
To obtain additional perspectives on factors that could impact the effectiveness of these provisions, we interviewed and reviewed the public statements and analyses of agency officials, academics, and market experts. For parts of our work that involved the analysis of computer-processed data, we assessed the reliability of these data and determined that they were sufficiently reliable for our purposes. Data sets for which we conducted data reliability assessments include Federal Reserve Board transaction data for TAF, PDCF, TSLF, and CPFF; Treasury transaction data for CPP and TIP; and FDIC transaction data for TLGP programs (the Debt Guarantee Program and the Transaction Account Guarantee Program). We have relied on Federal Reserve Board and Treasury transaction data for their respective emergency programs for past reports, and we determined that these data were sufficiently reliable for the purpose of presenting and analyzing the pricing and utilization of these programs. To assess the reliability of FDIC’s TLGP data, we interviewed FDIC staff about steps they took to maintain the integrity and reliability of program data. We also assessed the reliability of data sources used to provide indicators of the pricing and terms for market alternatives that could have been available to institutions that participated in these programs. These data sources were interbank interest rates (the London Interbank Offered Rate), additional interest rates from the Federal Reserve, credit default swap spreads from Bloomberg, repurchase agreement interest rates from IHS Global Insight, and repurchase agreement haircuts from the Federal Reserve Bank of New York. To assess the reliability of these data we took a number of steps including inspecting data for missing observations, corroborating interest rate data with other sources, and discussing data with agency officials. We determined these data were sufficiently reliable for measuring market alternatives that might have been available to participants in emergency programs. To calculate the average percentage of assets supported by emergency programs for banking organizations of different sizes, in addition to the program transaction data discussed above, we used Y-9 data for bank holding companies from the Federal Reserve Bank of Chicago, demographic data for bank holding companies and other emergency program participants from the Federal Reserve System’s National Information Center and SNL Financial, balance sheet and demographic data for depository institutions from FDIC, and gross domestic product price index data from the Bureau of Economic Analysis. To assess the reliability of these data, we reviewed relevant documentation. In addition, for the Y-9 data for bank holding companies from the Federal Reserve Bank of Chicago and the balance sheet data for depository institutions from FDIC, we conducted electronic testing of key variables. We conducted this performance audit from January 2013 through November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
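The missing-observation and corroboration checks described in this appendix can be illustrated with a minimal sketch. The example below is hypothetical: the dates, rates, tolerance, and variable names are invented and are not the actual LIBOR, Federal Reserve, Bloomberg, IHS Global Insight, or FDIC data or the specific procedures applied to them; it shows only the general kind of electronic test described above.

```python
# Hypothetical illustration of two basic data reliability checks:
# (1) flag missing observations and (2) corroborate one interest rate
# series against a second source. All dates and rates are invented.

primary = {"2008-09-15": 2.81, "2008-09-16": 3.06, "2008-09-17": None,
           "2008-09-18": 3.20, "2008-09-19": 3.25}
secondary = {"2008-09-15": 2.80, "2008-09-16": 3.05, "2008-09-17": 3.12,
             "2008-09-18": 3.21, "2008-09-19": 3.45}

# Check 1: identify dates with missing observations in the primary series.
missing = [date for date, rate in primary.items() if rate is None]
print("Missing observations:", missing)

# Check 2: compare overlapping observations against the second source and
# flag any date where the two series differ by more than 5 basis points.
TOLERANCE = 0.05
discrepancies = {date: (primary[date], secondary[date])
                 for date in primary
                 if primary[date] is not None
                 and abs(primary[date] - secondary[date]) > TOLERANCE}
print("Observations outside tolerance:", discrepancies)
```

Flagged dates would then be investigated against source documentation or discussed with agency officials, consistent with the steps described above.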
During the financial crisis, the Federal Reserve System, Treasury, and FDIC introduced new programs with broad-based eligibility to provide general funding support to the banking sector and stabilize the financial system. Federal government interventions that provided the most significant direct funding support to U.S. bank holding companies or their subsidiaries were: Treasury’s capital investments through the Troubled Asset Relief Program; the Federal Reserve System’s credit and liquidity programs; and FDIC’s guarantees of certain newly issued debt and previously uninsured deposits through the Temporary Liquidity Guarantee Program (TLGP). The first of these interventions occurred in late 2007 when the Federal Reserve System modified discount window terms and launched a new program to auction discount window loans to banks to address strains in interbank credit markets. Discount window. In August 2007, the cost of term funding (loans provided at terms of 1 month or longer) spiked suddenly—primarily due to investor concerns about banks’ actual exposures to various mortgage-related securities—and commercial banks increasingly had to borrow overnight to meet their funding needs. The Federal Reserve Board feared that the disorderly functioning of interbank lending markets would impair the ability of commercial banks to provide credit to households and businesses. To ease stresses in these markets, on August 17, 2007, the Federal Reserve Board approved two temporary changes to discount window terms: (1) a reduction of the discount rate—the interest rate at which the Reserve Banks extended collateralized loans at the discount window—by 50 basis points; and (2) an extension of the discount window lending term from overnight to up to 30 days, with the possibility of renewal. These changes initially resulted in little additional borrowing from the discount window. After subsiding in October 2007, tensions in term funding markets reappeared in late November, possibly driven by a seasonal contraction in the supply of year-end funding. Term Auction Facility (TAF). On December 12, 2007, the Federal Reserve Board announced the creation of TAF to address continuing disruptions in U.S. term interbank lending markets. TAF provided term funding to depository institutions eligible to borrow from the discount window. In contrast to the traditional discount window program, which loaned funds to individual institutions at the discount rate, TAF auctioned loans to many eligible institutions at once at a market-determined interest rate. Federal Reserve Board officials noted that one important advantage of this auction approach was that it could address concerns among eligible borrowers about the perceived stigma of discount window borrowing. TAF was the largest Federal Reserve System emergency program in terms of the dollar amount of funding support provided, with TAF loans outstanding peaking at $493 billion in March 2009. In March 2008, the Federal Reserve Board invoked its emergency authority under Section 13(3) of the Federal Reserve Act to authorize two new programs to support repurchase agreement markets—large, short-term collateralized funding markets—that many financial institutions rely on to finance a wide range of securities. The Federal Reserve Board limited eligibility for these programs to the primary dealers, a designated group of broker-dealers and banks that transact with the Federal Reserve Bank of New York (FRBNY) in its conduct of open market operations. Many of the primary dealers are subsidiaries of U.S. 
bank holding companies or large foreign banking organizations. Term Securities Lending Facility (TSLF). On March 11, 2008, the Federal Reserve Board announced the creation of TSLF to auction 28-day loans of U.S. Treasury securities to primary dealers to increase the amount of high-quality collateral available for these dealers to borrow against in the repurchase agreement markets. In early March, the Federal Reserve Board found that repurchase agreement lenders were requiring higher haircuts for loans against a range of securities and were becoming reluctant to lend against mortgage-related securities. As a result, many financial institutions increasingly had to rely on higher-quality collateral, such as U.S. Treasury securities, to obtain cash in these markets, and a shortage of such high quality collateral emerged. Through competitive auctions that allowed dealers to bid a fee to exchange harder-to-finance collateral for easier-to-finance Treasury securities, TSLF was intended to promote confidence among lenders and to reduce the need for dealers to sell illiquid assets into the markets, which could have further depressed the prices of these assets. The market value of TSLF securities loans outstanding peaked at $236 billion in October 2008. Primary Dealer Credit Facility (PDCF). On March 16, 2008, the Federal Reserve Board announced the creation of PDCF to provide overnight collateralized cash loans to the primary dealers. In the days following the March 11 announcement of TSLF, one of the primary dealers, Bear Stearns, experienced a run on its liquidity. Because the first TSLF auction would not be held until later that month, Federal Reserve Board and FRBNY staff worked to ready PDCF for launch by Monday, March 17, 2008, when Federal Reserve Board officials feared a Bear Stearns bankruptcy announcement might trigger runs on the liquidity of other primary dealers. Although the Bear Stearns bankruptcy was averted, PDCF commenced operation on March 17, 2008. Eligible PDCF collateral initially included collateral eligible for open-market operations as well as investment-grade corporate securities, municipal securities, and asset-backed securities, including private label mortgage-backed securities. The Federal Reserve Board later expanded eligible collateral types for both TSLF and PDCF. In late 2008, the bankruptcy of Lehman Brothers triggered an intensification of the crisis and the Federal Reserve System, Treasury and FDIC took a range of new actions to provide additional support to financial institutions and key credit markets. Federal Reserve System actions. In September and October 2008, the Federal Reserve Board modified its existing programs, launched new programs, and took other actions to address worsening market conditions. Modifications to TSLF, PDCF, and TAF. On September 14, 2008, shortly before Lehman Brothers announced it would file for bankruptcy, the Federal Reserve Board announced changes to TSLF and PDCF to provide expanded liquidity support to primary dealers. Specifically, the Federal Reserve Board announced that TSLF-eligible collateral would be expanded to include all investment-grade debt securities and PDCF-eligible collateral would be expanded to include all securities eligible to be pledged in the tri-party repurchase agreements system, including noninvestment grade securities and equities. 
On September 29, 2008, the Federal Reserve Board also announced expanded support through TAF by doubling the amount of funds that would be available in each TAF auction cycle from $150 billion to $300 billion. Commercial Paper Funding Facility (CPFF). On October 7, 2008, the Federal Reserve Board announced the creation of CPFF under its Section 13(3) authority to provide a liquidity backstop to U.S. issuers of commercial paper. Commercial paper is an important source of short-term funding for U.S. financial and nonfinancial businesses. CPFF became operational on October 27, 2008, and was operated by FRBNY. In the weeks leading up to CPFF’s announcement, the commercial paper markets showed signs of strain: the volume of commercial paper outstanding declined, interest rates on longer-term commercial paper increased significantly, and increasing amounts of commercial paper were issued on an overnight basis as money-market funds and other investors became reluctant to purchase commercial paper at longer-dated maturities. By standing ready to purchase eligible commercial paper, CPFF was intended to eliminate much of the risk that commercial paper issuers would be unable to issue new commercial paper to replace their maturing commercial paper obligations. Other actions. The Federal Reserve System launched other new programs that provided liquidity support for other market participants, but did not serve as a major source of direct support for U.S. bank holding companies or their subsidiaries. Troubled Asset Relief Program. On October 3, 2008, the Emergency Economic Stabilization Act of 2008 (EESA) was signed into law to help stem the financial crisis. EESA provided Treasury with the authority to create the Troubled Asset Relief Program (TARP), under which it could buy or guarantee up to almost $700 billion of the “troubled assets” that it deemed to be at the heart of the crisis, including mortgages, mortgage-backed securities, and any other financial instruments, such as equity investments. Treasury created the Capital Purchase Program (CPP) in October 2008 to provide capital to viable financial institutions by using its authority to purchase preferred shares and subordinated debt. In return for its investments, Treasury received dividend or interest payments and warrants. On October 14, 2008, Treasury allocated $250 billion of the original $700 billion in overall TARP funds for CPP. The allocation was subsequently reduced in March 2009 to reflect lower estimated funding needs, as evidenced by actual participation rates. The program was closed to new investments on December 31, 2009. Smaller capital infusion programs included the Targeted Investment Program (TIP) and the Community Development Capital Initiative (CDCI). Temporary Liquidity Guarantee Program. In October 2008, FDIC created TLGP to complement the Federal Reserve and Treasury programs in restoring confidence in financial institutions and repairing their capacity to meet the credit needs of American households and businesses. TLGP’s Debt Guarantee Program (DGP) was designed to improve liquidity in term-funding markets by guaranteeing certain newly issued senior unsecured debt of financial institutions and their holding companies. By guaranteeing payment of these debt obligations, DGP was intended to address the difficulty that creditworthy institutions were facing in replacing maturing debt because of risk aversion in the markets. 
TLGP’s Transaction Account Guarantee Program (TAGP) also was created to stabilize an important source of liquidity for many financial institutions. TAGP temporarily extended an unlimited deposit guarantee to certain noninterest-bearing transaction accounts to assure holders of the safety of these deposits and limit further outflows. By facilitating access to borrowed funds at lower rates, Treasury, FDIC, and the Federal Reserve expected TLGP to free up funding for banks to make loans to creditworthy businesses and consumers. Furthermore, by promoting stable funding sources for financial institutions, they intended TLGP to help avert bank and thrift failures that would impose costs on the insurance fund and taxpayers and potentially contribute to a worsening of the crisis. Although imperfect, one indicator of the extent to which an institution benefited from participation in an emergency program is the relative price of estimated market alternatives to the program. To determine how pricing of the emergency assistance compared to market rates, we compared the interest rates and fees charged by the Federal Reserve and FDIC for participation in the emergency lending and guarantee programs with market alternatives that might have been available to program participants. We considered a number of potential indicators of market interest rates available to financial institutions, including a survey of interbank interest rates (the London Interbank Offered Rate or LIBOR), commercial paper interest rates published by the Federal Reserve Board, spreads on bank credit default swaps (CDS), and interest rates on repurchase agreements. These interest rates and spreads provide a general indication of market alternatives available to participants but are imperfect and hence unlikely to reflect available alternatives for all participants at all points in time. For example, participants’ access to market alternatives may have been limited, there may be only limited data on the relevant private market, or market alternatives could vary across participants in ways that we do not observe in the data. Furthermore, once programs were introduced, they probably influenced the price of market alternatives, making it difficult to interpret differences between program pricing and contemporary market pricing while programs were active. Where possible—when programs had pricing rules (PDCF, CPFF, and DGP)—we applied program pricing rules during time periods that were not influenced by the program itself to compare program pricing with counterfactual market prices. By choosing high and low financial stress time periods, we can estimate the extent to which participants may have benefited from program pricing during the financial crisis as well as the extent to which program pricing became less attractive as financial conditions returned to normal. Programs with auction-based pricing (TAF and TSLF) raise particular challenges in interpreting differences between program pricing and market pricing. Under certain assumptions, bidders would bid program pricing up to their market alternatives, which could limit potential benefits from the program as well as eliminate any difference between program and market pricing. In addition, without a pricing rule we cannot apply pricing for auction-based programs to high or low financial stress time periods not influenced by the program itself—in other words, contemporaneous pricing is contaminated by the program itself, making it difficult to determine the true market alternative. 
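To make the rule-based comparison described above (for PDCF, CPFF, and DGP) concrete, the following minimal sketch applies a hypothetical pricing rule, a fixed spread over an index rate, in one high-stress window and one low-stress window and compares the result with an invented market benchmark. The spread, the index rate, the benchmark rates, and the window labels are all assumptions for illustration; they are not the actual program terms or the market data used in our analysis.

```python
# Hypothetical sketch: apply a rule-based program price (an index rate plus
# a fixed spread) in a high-stress and a low-stress window and compare it
# with an invented market benchmark rate. All rates are in percent and are
# assumptions for illustration only.

PROGRAM_SPREAD = 2.00  # assumed fixed spread over the index under the pricing rule

windows = {
    "high-stress window": {"index_rate": 1.00, "market_benchmark": 5.50},
    "low-stress window": {"index_rate": 1.00, "market_benchmark": 2.40},
}

for label, rates in windows.items():
    program_rate = rates["index_rate"] + PROGRAM_SPREAD
    # A positive spread suggests the program was priced below the market
    # alternative in that window; a zero or negative spread suggests it was not.
    spread_vs_market = rates["market_benchmark"] - program_rate
    print(f"{label}: program rate {program_rate:.2f} percent, "
          f"market benchmark {rates['market_benchmark']:.2f} percent, "
          f"spread {spread_vs_market:+.2f} percentage points")
```

Under these invented figures, the program price would sit below the market benchmark in the stress window and above it once conditions normalize, which is the general pattern this comparison is designed to detect.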
As a result of this contamination of contemporaneous pricing for the auction-based programs, deviations between program and market pricing could indicate differences in terms rather than a benefit to participating financial institutions. These challenges suggest that our estimates of the difference between program and market pricing for auction-based programs should be interpreted with caution. TAF and TSLF also had minimum pricing determined by the Federal Reserve that was prescribed when auctions were undersubscribed. In these instances, prices were no longer auction-determined in the traditional sense, although the outcome of the auction (undersubscription) determined when the minimum pricing would apply. It is important to note that, among other limitations, our indicators do not capture all the benefits associated with program participation. Because our proxies for market alternatives are imperfect, market prices appear on occasion to be lower than emergency program pricing despite significant participation by financial institutions at these times. Participation by itself suggests that program prices and/or terms were relatively attractive in comparison to available alternatives—benefits could arise from price, quantity available, or other nonprice characteristics of the assistance (loan term, eligible collateral, etc.). Therefore, we discarded values of spreads between program pricing and market alternatives when they were zero or negative, since such spreads are unlikely to capture the benefits that accrued to participants. If these truly reflected market alternatives for the pool of potential participants, then there would be no participation or the participation would have been based on other nonprice considerations. We assume that the true (unobserved) market alternatives overlap at times with our observed proxies. At other times the market alternatives we are able to observe and measure may not overlap with the true market alternatives for participants (including when observed market alternatives indicate programs are more expensive than market rates). Because PDCF operated similarly to repurchase agreement markets, we compared collateral haircuts in PDCF with select asset classes in the triparty (intermediated by a clearing bank) repurchase agreement markets. We selected those asset classes where we were able to draw clear parallels between categories of collateral allowed under PDCF and categories identified in the data on the private repurchase agreement market that we received from the Federal Reserve Bank of New York. The haircut, the amount of collateral required in excess of the value of the loan to secure it, is an important term in repurchase agreement contracts and other collateralized lending. Securities with greater risk or less liquidity generally have larger haircuts (i.e., more collateral is required). PDCF borrowers might have utilized triparty repurchase agreement markets for alternative sources of secured borrowing during the 2007-2009 financial crisis. To determine the extent to which emergency equity support programs, CPP and TIP, were priced more generously than estimated market alternatives, we reviewed estimates of the expected budget cost associated with equity funding support programs as well as a valuation analysis commissioned by the Congressional Oversight Panel (COP). The benefits that accrued to banks from participation in equity funding support programs are likely to be proportional to the subsidy rates estimated for accounting purposes. 
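The net present value logic behind these subsidy estimates, which is described in more detail below, can be sketched with entirely hypothetical figures. The investment amount, expected cash flows, and discount rate in this example are invented and are not drawn from the actual Treasury, Congressional Budget Office, or COP estimates discussed in this report.

```python
# Hypothetical sketch of a net present value subsidy calculation for an
# equity investment. The investment amount, expected cash flows, and
# discount rate are invented for illustration only.

investment = 100.0                                  # price paid for the preferred shares
expected_cash_flows = [5.0, 5.0, 5.0, 5.0, 105.0]   # assumed dividends, then redemption
discount_rate = 0.12                                # assumed risk-adjusted discount rate

# Net present value of what the investor expects to receive back over time.
npv_of_returns = sum(cash_flow / (1 + discount_rate) ** (year + 1)
                     for year, cash_flow in enumerate(expected_cash_flows))

# The subsidy is the amount invested less the value expected back; the
# subsidy rate expresses that difference as a share of the investment.
subsidy = investment - npv_of_returns
print(f"NPV of expected returns: {npv_of_returns:.1f}")
print(f"Implied subsidy: {subsidy:.1f} ({subsidy / investment:.1%} of the investment)")
```

In this invented example, the agency pays more for the equity than the discounted value of what it expects to receive back, and that difference is the subsidy; the size of any such subsidy depends heavily on the assumed discount rate and market conditions.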
Estimates of subsidy rates are based on a net present value analysis—the price and terms offered by a federal agency are compared with the expected lifetime cost (net present value) of the equity, and the difference is known as the subsidy. Because private market participants might have charged a price based on a comparable net present value analysis, banks would have benefited to the extent that the prices offered by Treasury for their equity exceeded what they were likely to receive based on the net present value. The valuation analysis commissioned by COP explicitly compared the prices received by Treasury with market-based valuations of similar securities. We assume that the net present values estimated for accounting purposes by Treasury and the Congressional Budget Office (CBO) are reasonable proxies for the market valuations that are more directly estimated in the COP analysis. We used the earliest available estimates from CBO and Treasury because they were closest to market conditions at the time that the programs were initiated. Estimates of these subsidy rates depended on timing and market conditions, and the size of these subsidy rates likely fell over time as market conditions improved. Emergency government programs to stabilize financial markets resulted in funding support to bank holding companies and insured depository institutions (collectively, banking organizations) of various sizes. To compare use of emergency funding programs by banking organizations of different sizes, we analyzed quarterly data on bank holding companies and depository institutions for the period from 2008 to 2012 along with data on emergency program transactions that occurred during that period. We used quarterly balance sheet and demographic data on bank holding companies for the period from the first quarter of 2008 through the fourth quarter of 2012 from the Federal Reserve Bank of Chicago and the Federal Reserve System’s National Information Center (NIC), quarterly balance sheet and demographic data on depository institutions from FDIC for the period from the first quarter of 2008 through the fourth quarter of 2012, and quarterly data on the GDP price index from the Bureau of Economic Analysis (BEA) for the period from the first quarter of 2008 through the fourth quarter of 2012. We also used data on Debt Guarantee Program (DGP) and TAGP transactions from FDIC, data on Commercial Paper Funding Facility, Primary Dealer Credit Facility, TAF, and TSLF transactions from the Board of Governors of the Federal Reserve System, and data on CPP and TIP transactions from the U.S. Department of the Treasury. Finally, we used demographic data on emergency funding program participants obtained from NIC and from SNL Financial. We organized depository institutions and bank holding companies into groups—hereafter banking organizations—based on their regulatory high holder (the highest holding company in a tiered organization), where depository institutions or bank holding companies that did not indicate a high holder are assumed to be their own high holder. We calculated consolidated assets for each banking organization, excluding banking organizations for which we cannot reliably calculate consolidated assets. We excluded banking organizations with a high holder that was not in our data, e.g., banking organizations with foreign high holders. 
For banking organizations with a high holder that was in our data and that included at least one bank holding company, we excluded those for which the high holder did not report consolidated assets, those for which the high holder reported consolidated assets but they were less than its parent-only assets, those for which the high holder’s consolidated assets were less than consolidated assets reported by some other bank holding company in the organization, those for which none of the bank holding companies reported consolidated assets, and those that did not contain any depository institutions. For all remaining banking organizations that contained at least one bank holding company, we set consolidated assets for the group equal to consolidated assets reported by the high holder. Note that consolidated assets for a bank holding company include the assets of all consolidated subsidiaries, which generally include all companies for which the bank holding company owns more than 50 percent of the outstanding voting stock. For banking organizations with a high holder in our data that did not include a bank holding company, such as standalone depository institutions, we set consolidated assets for the banking organization equal to the depository institution’s consolidated assets. Banking organizations for which we could reliably calculate consolidated assets constitute our analysis sample. Small bank holding companies (those with assets less than $500 million) generally report their consolidated assets in the second and fourth quarters of each year, but they generally do not do so in the first and third quarters of each year. To maintain consistency in the composition of the analysis sample over time, we ultimately used results for only the second and fourth quarters of each year from 2008 to 2012. Companies that converted to bank holding companies during the crisis are included in our analysis only for the quarters for which they filed financial statements for bank holding companies with the Federal Reserve. For example, both Goldman Sachs Group, Inc. and Morgan Stanley became bank holding companies in September 2008 but neither filed form FR Y-9C, the source of our data on consolidated assets for large bank holding companies, until the first quarter of 2009. As a result, these two companies are not part of our analysis sample until 2009. We assigned banking organizations in our analysis sample to one of six size groups based on their consolidated assets, adjusted for inflation and expressed in fourth quarter 2012 dollars: less than $500 million; at least $500 million and less than $1 billion; at least $1 billion and less than $10 billion; at least $10 billion and less than $50 billion; at least $50 billion and less than $250 billion; and $250 billion or more. Table 3 shows the numbers of banking organizations in our analysis sample by size group and the numbers of banking organizations excluded from our analysis sample for the second and fourth quarters of each year from 2008 to 2012. Table 3. Numbers and average assets of banking organizations in analysis sample by size and quarter, June 30, 2008 to December 31, 2012. For each banking organization in our analysis sample, we calculated the percentage of assets funded by capital provided, loans provided, and liabilities guaranteed by emergency programs at quarter-end for the second and fourth quarters of 2008 through 2012. Capital provided by emergency programs includes capital investments by Treasury under CPP and TIP. 
Loans provided by emergency programs include TAF, TSLF, PDCF, and CPFF loans from the Federal Reserve System. Funding guaranteed by emergency programs includes deposits guaranteed by FDIC through TAGP and debt guaranteed by FDIC through DGP. To compare the extent to which banking organizations of various sizes used emergency programs, we calculated the percentage of banking organization assets that were supported by emergency programs—either through capital injections, loans, or guarantees—at quarter-end dates from mid-2008 through the end of 2012. In addition, for each of the three types of support, we decomposed average support as a percentage of assets for banking organizations of different sizes into its two components: (1) the rate of participation in emergency programs by banking organizations of different sizes as measured by the percentage of banking organizations using funds provided or guaranteed by the emergency programs and (2) average support as a percentage of assets for those participants. Federal Reserve System programs. TAF was established in December 2007, PDCF and TSLF were established in March 2008, and CPFF began purchasing commercial paper in October 2008. As of the end of 2008, combined CPFF, PDCF, TAF, and TSLF loans outstanding ranged from about 0.01 percent of assets on average for all banking organizations with less than $500 million in assets to about 2.5 percent of assets on average for all banking organizations with at least $50 billion but less than $250 billion in assets (see fig. 3). For banking organizations with $250 billion or more in assets, combined CPFF, PDCF, TAF, and TSLF loans outstanding were about 2.0 percent of assets on average. As of mid-2009, loans outstanding for these four programs combined had declined to less than 1 percent of assets on average for banking organizations of all sizes, and as of the end of 2009, they had declined to less than half a percent of assets on average. Through mid-2009, the larger banking organizations participated in the four Federal Reserve System programs we analyzed at higher rates than smaller banking organizations (see Panel A of table 4). However, by the end of 2009, banking organizations with $250 billion or more in assets had completely exited all of these programs, but of the remaining institutions, larger banking organizations continued to participate at higher rates than smaller banking organizations. These programs all closed in the first quarter of 2010. Among banking organizations that participated in at least one of the four Federal Reserve programs, average combined CPFF, PDCF, TAF, and TSLF loans outstanding as a percentage of assets were generally larger for smaller participants (see Panel B of table 4). As of the end of 2008, among participating banking organizations, combined CPFF, PDCF, TAF, and TSLF loans outstanding ranged from about 2.6 percent of assets on average for participants with $250 billion or more in assets to about 6.0 percent of assets on average for participants with less than $500 million in assets. As of the end of 2009, combined CPFF, PDCF, TAF, and TSLF loans outstanding ranged from about 2.1 percent of assets for participants with at least $50 billion but less than $250 billion in assets to about 7.9 percent of assets for banking organizations with less than $500 million in assets, while banking organizations with $250 billion or more in assets were no longer participating in these programs. Treasury capital investments. 
Treasury began making equity investments in banking organizations through CPP in October 2008 and it established TIP in December 2008. As of the end of 2008, CPP investment amounts outstanding ranged from about 0.01 percent of assets on average for banking organizations with less than $500 million in assets to about 1.9 percent of assets on average for banking organizations with at least $50 billion but less than $250 billion in assets (see fig. 4). CPP and TIP investment amounts outstanding for banking organizations with $250 billion or more were about 1.6 percent of assets on average. As of mid-2010, banking organizations with $250 billion or more in assets had repaid Treasury and exited CPP and TIP. At the same time, CPP investment amounts had fallen to less than 1 percent of assets on average for banking organizations in all smaller size groups. As of the end of 2012, banking organizations with at least $50 billion but less than $250 billion in assets had repaid Treasury and exited CPP, and CPP investment amounts had fallen to less than 0.25 percent of assets on average for banking organizations in all smaller size groups. At the end of 2008, participation rates in CPP and TIP were higher for larger banking organizations and ranged from about 0.5 percent for banking organizations with less than $500 million in assets to about 87.5 percent for banking organizations with $250 billion or more in assets (see Panel A of table 5). As of the end of 2010, all banking organizations with $250 billion or more in assets had repaid Treasury and were no longer participating in CPP or TIP. For banking organizations that continued to participate in CPP, participation rates ranged from about 4.8 percent for banking organizations with less than $500 million in assets to 35 percent for banking organizations with at least $50 billion but less than $250 billion in assets. As of the end of 2012, all banking organizations with $50 billion or more had exited CPP and TIP. For banking organizations that continued to participate in CPP, participation rates ranged from about 2.4 percent for banking organizations with less than $500 million in assets to about 6.5 percent for banking organizations with $1-10 billion in assets (see Panel A of table 5). For participating banking organizations of all sizes, average CPP and TIP amounts outstanding were 2 to 3 percent of assets in most quarters (see Panel B of table 5). FDIC’s TLGP. FDIC implemented DGP and TAGP, the two components of TLGP, in October 2008. As of the end of 2008, average DGP-guaranteed debt and TAGP-guaranteed deposit amounts outstanding altogether as a percentage of assets were higher for larger banking organizations than smaller banking organizations and ranged from about 1.5 percent of assets on average for banking organizations with less than $500 million in assets to 7.7 percent of assets on average for banking organizations with $250 billion or more in assets (see fig. 5). By the end of 2010, differences in utilization of DGP and TAGP across banking organizations of different sizes had diminished somewhat, with DGP-guaranteed debt and TAGP-guaranteed deposit amounts outstanding altogether ranging from 1.4 percent for banking organizations with $250 billion or more in assets to about 3.2 percent for banking organizations with at least $1 billion but less than $10 billion in assets. 
TAGP expired on December 31, 2010, and by the end of 2011, DGP-guaranteed debt amounts outstanding were less than 1 percent of assets on average for banking organizations of all sizes. DGP expired on December 31, 2012, so none of the assets of any banking organization were funded using DGP-guaranteed debt after that date. In general, 50 percent or more of the banking organizations in every size group were using either DGP-guaranteed debt or TAGP-guaranteed deposits (or both) as funding through the end of 2010 (see Panel A of table 6). At the end of 2008, participation rates ranged from about 66.3 percent for banking organizations with less than $500 million in assets to about 92.9 percent for banking organizations with at least $1 billion but less than $10 billion in assets. At the end of 2010, participation rates ranged from about 50 percent for banking organizations with at least $50 billion but less than $250 billion in assets to 100 percent for banking organizations with $250 billion or more in assets. Participation rates for banking organizations with less than $50 billion in assets fell after TAGP expired on December 31, 2010, and in mid-2011 ranged from about 0.04 percent for banking organizations with less than $500 million in assets to about 3.1 percent for banking organizations with at least $1 billion but less than $10 billion in assets. Participation rates were about 42.1 percent and 100 percent for banking organizations with at least $50 billion but less than $250 billion in assets and with $250 billion or more in assets, respectively, at that time. By mid-2012, only banking organizations with $50 billion or more were participating in DGP, which then expired at the end of 2012. At the end of 2008, average DGP-guaranteed debt and TAGP-guaranteed deposit amounts outstanding were higher as a percentage of assets for larger participants than for smaller participants and ranged from about 2.3 percent for participants with less than $500 million in assets to about 8.8 percent for participants with $250 billion or more in assets (see Panel B of table 6). At the end of 2010, average DGP-guaranteed debt and TAGP-guaranteed deposit amounts outstanding as a percentage of assets had fallen for banking organizations with $50 billion or more in assets but not for smaller banking organizations. At that time, DGP-guaranteed debt and TAGP-guaranteed deposit amounts outstanding ranged from about 1.4 percent of assets on average for participants with $250 billion or more in assets to about 5.6 percent of assets on average for participants with $10-50 billion in assets. TAGP expired on December 31, 2010, and as of the end of 2011, DGP-guaranteed debt amounts outstanding were less than 2 percent of assets on average for banking organizations of all sizes. DGP expired on December 31, 2012. Lastly, our analysis found that the six largest bank holding companies as of December 31, 2012—all with consolidated assets greater than $500 billion—used the emergency programs to varying degrees but had exited most by the end of 2009. Table 7 shows the percentage of consolidated assets funded by DGP-guaranteed debt, TAGP-guaranteed deposits, TAF loans, CPFF loans, PDCF loans, TSLF loans, and CPP and TIP equity investments for the largest bank holding companies at year-end from 2008 to 2012. For comparison purposes we also show the average percent of assets funded by the same programs for the six banking organization size groups over the same period. Table 7. 
Average outstanding amounts of equity provided, loans provided, and liabilities guaranteed by emergency programs for select bank holding companies and for banking organizations by size at year end, 2008-2012. Goldman Sachs Group, Inc. and Morgan Stanley became bank holding companies in September 2008 but did not file form FR Y-9C, the source of our data on consolidated assets, for the fourth quarter of 2008. In addition to the contact named above, Karen Tremba (Assistant Director), Jordan Anderson, Bethany M. Benitez, Stephanie Cheng, John Fisher, Michael Hoffman, Risto Laboski, Courtney LaFountain, Jon Menaster, Marc Molino, Robert Rieke, and Jennifer Schwartz made key contributions to this report.
The federal government extended unprecedented support to financial institutions to stabilize financial markets during the financial crisis. While these actions helped to avert a more severe crisis, they raised questions about the appropriate scope of government safety nets for financial institutions. GAO was asked to review the benefits that large bank holding companies (those with more than $500 billion in assets) have received from actual and implied government support. This is the first of two reports GAO will issue on this topic. This report examines (1) actual government support for banks and bank holding companies during the financial crisis, and (2) recent statutory and regulatory changes related to government support for banks and bank holding companies. GAO reviewed relevant statutes, regulations, and agency documents; analyzed program transaction data; and interviewed regulators, representatives of financial institutions, and academics. In a second report to be issued in 2014, GAO will examine any funding or other economic advantages the largest bank holding companies have received as a result of implied government support. During the 2007-09 financial crisis, the federal government's actions to stabilize the financial system provided funding support and other benefits to bank holding companies and their subsidiaries. Agencies introduced new programs with broad-based eligibility that provided funding support to eligible institutions, which included entities that were part of a bank holding company and others. Programs that provided the most significant support directly to bank holding companies or their subsidiaries included Department of the Treasury capital investment programs, Federal Reserve System lending programs, and Federal Deposit Insurance Corporation (FDIC) guarantee programs. Together these actions helped to stabilize financial conditions, while participating firms also accrued benefits specific to their own institutions, such as liquidity benefits from programs that allowed them to borrow at longer maturities and at interest rates that were below possible market alternatives. At the end of 2008, program use--measured for each institution as the percentage of total assets supported by the programs--was higher on average for banks and bank holding companies with $50 billion or more in total assets than for smaller firms. The six largest bank holding companies were significant participants in several emergency programs but exited most by the end of 2009. Differences in program use were driven in part by how institutions funded themselves. For example, while smaller banks relied more on deposit funding, larger bank holding companies relied more on short-term funding markets and participated more in programs that assisted these markets. In addition to these programs, the Board of Governors of the Federal Reserve System (Federal Reserve Board) granted several regulatory exemptions to allow banks to provide liquidity support to their nonbank affiliates and for other purposes. Finally, government assistance to individual troubled firms benefited these firms, their counterparties, and the financial system. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) contains provisions that aim to modify the scope of federal safety nets, restrict future government support and strengthen regulatory oversight for the banking sector, but implementation is incomplete and the effectiveness of some provisions remains uncertain. 
Agencies have finalized certain changes to traditional safety nets for insured banks, but impacts of provisions to limit the scope of transactions that benefit from these safety nets will depend on how they are implemented. The act also places restrictions on emergency authorities used by regulators during the crisis to assist financial firms. For example, it prohibits the use of these authorities to rescue a specific failing firm. The Federal Reserve Board is required by the act to establish policies and procedures implementing changes to its emergency authority under Section 13(3) of the Federal Reserve Act, but it has not completed its process for drafting the required procedures or set time frames for doing so. Setting time frames could help ensure more timely completion of these procedures. FDIC has made progress toward implementing its new authority under the Dodd-Frank Act to resolve a large failing firm. FDIC continues to work to address potential obstacles to the viability of its resolution process as an alternative to bankruptcy, such as challenges that could arise when resolving more than one failing firm. Finally, the Federal Reserve Board has finalized certain enhanced prudential standards for the largest financial firms intended to reduce the risks these firms could pose to the financial system. GAO recommends that the Federal Reserve Board establish timeframes for completing its process for drafting procedures related to its emergency lending authority to ensure timely compliance with Dodd-Frank Act requirements. The Federal Reserve Board accepted this recommendation.
The unique characteristics and relative abundance of wood have made it a natural material for a variety of uses, including homes and other structures, furniture, tools, vehicles, and decorative objects. Because wood varies in characteristics and volume by species, it may be heavy or light, stiff or flexible, and hard or soft. Federal agencies conduct research on the range of processes that occur between the time a tree is grown in the forest to the time it becomes a wood product and then is recycled. For purposes of our review, wood utilization research and product development refers to the activities that occur from harvesting the wood through the recycling of wood and paper products. (See fig. 1.) According to the North American Industry Classification System, the U.S. forest products industry is divided into two sectors: wood product manufacturing and pulp and paper manufacturing. The wood product manufacturing sector comprises small companies, while the pulp and paper manufacturing sector tends to have fewer, larger companies. The wood product manufacturing sector can be broken into three sub- sectors: (1) primary producers—sawmills and plywood mills; (2) secondary producers—millwork, cabinet, and furniture manufacturers; and (3) structural and reconstituted products producers—oriented strandboard (OSB), I-Joist, laminated veneer lumber, medium density fiberboard, and particleboard. The United States is the world’s leading producer of lumber and wood products used in residential construction and in commercial wood products. According to 2004 data (the most recent data available), the wood product sector employed 535,000 workers nationwide and produced shipments valued at $103 billion. The pulp and paper manufacturing sector includes two industry groups: (1) manufacturers of pulp and paper and (2) manufacturers of products made from purchased paper and other materials, such as paper bags or tissues. The vast majority of the raw material for making paper is the residue from other mills—primarily chips from sawmills. The United States is also a leader in the pulp and paper business, producing about 28 percent of the world’s pulp and 25 percent of the total world output of paper and paperboard. In 2004 (the most recent data available), the paper manufacturing sector employed 440,000 workers nationwide and produced shipments valued at $154 billion. According to a federal government report, the U.S. forest products industry faces increasing competition from its traditional competitors (Canada, the Scandinavian countries, and Japan), as well as from emerging competitors (Brazil, Chile, and Indonesia). Domestic purchases of paper and paperboard declined from 2000 to 2002, but have begun to rebound since then. Approximately 120,000 jobs were lost in the paper manufacturing sector from 1999 to 2004, representing a 21.5-percent loss. Sectors of the wood product manufacturing industry have also declined. According to a 2003 Forest Service report, during the last decade, the wood household furniture industry lost approximately one-third of its market share to imports. China now accounts for one-third of U.S. imports, up from none a decade ago. Federal research and product development in wood utilization helps provide the science and technology needed to conserve the nation’s forest resources, supply the demand for wood products, and support forest management and restoration activities. 
At least 12 federal agencies support wood utilization research and product development activities, but only 2 of these agencies—the Forest Service and CSREES—have programs targeted for these activities. For the Forest Service, the Forest and Rangeland Renewable Resources Act of 1978 is the primary legislation authorizing the Secretary of Agriculture to implement a comprehensive research program for forest and rangeland renewable resources, including wood utilization, and to disseminate the results. Other relevant legislation includes the following: The Biomass Research and Development Act of 2000, which requires the secretaries of Agriculture and of Energy to cooperate on policies and procedures that promote research and development leading to the production of fuels and biobased products; the act also established the Biomass Research and Development Initiative. The Energy Policy Act of 2005 established technical areas for focusing research under the Biomass Research and Development Initiative. The Healthy Forests Restoration Act of 2003 established a grant program to encourage the commercialization of woody biomass. The Forest Service’s research and development organization establishes research work units in the field by developing formal mission statements, which must be approved by the Deputy Chief for Research and other senior managers. A team from the Deputy Chief’s Office and station directors’ office formally reviews these mission statements and the unit’s work at least every 5 years, and the review often includes input from the public and private sectors. The Forest Service’s wood utilization research and product development is carried out by scientists and professional support staff in 27 research work units around the country that were operating at the time of our review. Most of the Forest Service’s wood utilization research and product development takes place at 16 research work units in the Forest Products Laboratory, which conducts research of national and international scope. The other 11 research work units are located in the Forest Service’s Northeastern, Southern, Pacific Northwest, Pacific Southwest, and Rocky Mountain Research Stations, and these units mostly focus on regional wood utilization issues. For example, research work unit 4104 of the Southern station focuses on managing Southern pine ecosystems, whereas research work unit 4701 of the Northeastern station focuses on efficiently using northern forest resources. These research work units produce 5-year research work plans that identify the mission, the problem to be solved through research, the proposed research approach, planned accomplishments, and staffing needs. CSREES provides support for wood utilization research and product development through several grant programs. CSREES awards committee-directed grants to 10 designated wood utilization research centers at 12 universities. The first three centers were established in fiscal year 1985 at Oregon State University, Mississippi State University, and Michigan State University. These three centers were established to support wood utilization and harvesting research on western conifers, southern pine, and eastern hardwoods, respectively. In fiscal year 1993, three centers with specific research focuses were added at the University of Maine, the University of Minnesota at Duluth, and North Carolina State University. In fiscal year 1999, the University of Tennessee and the Inland Northwest Forest Products Research Consortium were added. 
The consortium consists of the universities of Idaho and of Montana, and Washington State University. The most recent additions are the University of Alaska Southeast, in fiscal year 2000, and West Virginia University, in fiscal year 2004. Every year each center submits a grant proposal, reviewed by CSREES staff, containing information on proposed research activities, budgets, and progress to date. Funding supports scientists and graduate students and helps to pay for new equipment, supplies, and travel. In addition, CSREES provides grants to state-supported colleges and universities that can be used for, but are not specifically focused on, wood utilization research and product development through the following: The McIntyre-Stennis Act, a formula grant program for forestry research in which two of eight potential funding areas focus on wood utilization and product development. The Hatch Act, a formula grant program designed to fund a number of broad agricultural research areas. The National Research Initiative, a competitive grant program with several research areas, including biobased products and energy. Wood utilization research and product development grants have been awarded under this initiative, as well as under CSREES' Small Business Innovation Research grants and other small grants programs. Ten other agencies also support wood utilization research and product development. Table 1 provides information on these agencies' principal authorizing legislation, describes the programs that have supported wood utilization research and product development, and identifies the mechanisms used for program delivery. The Federal Laboratory Consortium for Technology Transfer defines technology transfer as "the process by which existing knowledge, facilities or capabilities developed under federal research and development funding are utilized to fulfill public and private needs." Since 1978, Congress has enacted a series of laws to promote technology transfer and to provide technology transfer mechanisms and incentives. Table 2 presents selected laws that support technology transfer for wood utilization research and product development. In addition to these laws, Executive Order 12591 ("Facilitating Access to Science and Technology") directs federal agencies to encourage and facilitate collaboration among federal laboratories, state and local governments, universities, and the private sector—particularly small business—in order to assist in the transfer of technology to the marketplace. Technology transfer is also carried out through the nation's extension system, established by the Smith-Lever Act in 1914, to assist in the development of practical applications of research knowledge in agriculture, including wood utilization. Under this system, thousands of county and regional extension specialists bring university expertise to the local level. Funding is provided by CSREES through annual formula grants to supplement state and county funds for extension services. The funds can be used for natural resources, including forestry or wood utilization, depending upon the priorities of the university. The Renewable Resources Extension Act of 1978 created the Renewable Resources Extension Program. Under this program, CSREES provides funds to 72 universities, which use these funds, along with state, local, and institutional funds, to deliver educational programs to forest and rangeland owners and managers.
The program also provides guidance to states in developing their general extension programs for, among other things, timber utilization, harvesting, and marketing; wood utilization; and wood products marketing. These efforts have included wood utilization extension services, usually through extension specialists. Wood utilization research and product development conducted by 12 federal agencies span a broad spectrum of activities, and coordination of these activities is both formal and informal. These activities fall into five broad categories: (1) harvesting, (2) wood properties, (3) manufacturing and processing, (4) products and testing, and (5) economics and marketing. We grouped the wood utilization research and product development activities that the 12 agencies conduct into these five categories. Table 3 shows the definitions we used for the five categories and provides examples of the types of research and product development activities that fall into each of these categories. Table 4 shows the types of research and product development activities and examples of these activities by agency. All 12 agencies had activities in the manufacturing and processing category. The Forest Service and CSREES were the only two agencies that had wood utilization research and product development activities in all five categories. According to our analysis of the Forest Service's 27 research work units' plans covering fiscal years 1995 through 2005, over 80 percent of wood utilization research and product development occurred in three categories: wood properties, products and testing, and manufacturing and processing. In addition, CSREES wood utilization research centers' annual research proposals for the same period showed that about 70 percent of their activities occurred in the following three categories: wood properties, manufacturing and processing, and economics and marketing. According to a CSREES official, the CSREES wood utilization research centers are allowed by law to use the funding to conduct technology transfer activities, which are reflected in the economics and marketing category. Appendixes II and III, respectively, provide detailed information on wood utilization research and product development activities for the Forest Service, covering multiyear periods from the late 1980s to the present, and for CSREES, covering fiscal years 1995 through 2005. We found instances of both informal and formal coordination of federal activities for wood utilization and product development. According to many scientists at the Forest Service, informal coordination occurs among the relatively small wood utilization research and product development community of scientists, and these scientists are often aware of related scientific research. Scientists share information at scientific and industry conferences and professional meetings and through publications, and in some cases work informally to share staff and equipment. Specific examples include the following: One Forest Service scientist associated with the Southern Research Station—with 30 years of experience in wood utilization research on Douglas Fir—shares resources and expertise with the Pacific Northwest Research Station on the plantation growth of this species.
Forest Service scientists in the Southern Research Station have collaborated with colleagues in Australia, Denmark, Japan, and New Zealand on using wood from southern forests to develop wood composite products. These collaborative efforts were established primarily through professional relationships. A Forest Service scientist at the Pacific Northwest Research Station told us that scientists use annual professional meetings, such as those held by the Forest Products Society and the Society of Wood Science and Technology, as important mechanisms for coordinating their work and broadening the scope of their research area. The CSREES wood utilization research centers reported that they have more informal than formal coordination mechanisms with other wood utilization research centers and federal agencies. As at the Forest Service, these informal mechanisms include sharing information with colleagues through professional meetings, publications, and newsletters. We also identified some formal mechanisms to coordinate wood utilization research and product development that are set up through legislative provisions, agency rulemaking, memorandums of understanding, cooperative arrangements, and other joint ventures. Specific examples include the following: The Biomass Research and Development Act of 2000 requires USDA and the Department of Energy to carry out a Biomass Research and Development Initiative under which competitively awarded grants, contracts, and financial assistance are provided to eligible entities to carry out research on fuels and products derived from biomass, including woody biomass. The agencies work together on developing grant solicitations, reviewing grant proposals, and selecting recipients. The act also created a Biomass Research and Development Board, co-chaired by the Department of Energy and USDA, to coordinate programs within the federal government for promoting the use of biobased fuels and products. The board's mission is to maximize the benefits from federal grants and assistance by promoting collaboration and avoiding duplication of effort through strategic planning on biomass research. The board has approved the formation of a federal Woody Biomass Working Group to coordinate and focus federal efforts on woody biomass utilization. For 40 years, Forest Service wood utilization scientists have had standing annual meetings with representatives from both the paper and pulp and solid wood industries to present research results and obtain input and review from industry. When updating their research work unit plans every 5 years, these scientists also seek advice from outside sources, including industry representatives, academics, and environmental groups. Scientists also participate in research consortiums or cooperative arrangements with industry. For example, scientists in the Forest Service's Southern Research Station participate in a consortium studying wood quality that has members from nine companies, including Weyerhaeuser and Georgia Pacific. CSREES wood utilization research centers also form cooperative arrangements. According to an Oregon scientist, these research cooperatives typically consist of 10 to 12 partners. The cooperatives set a research agenda and formally coordinate research through annual meetings and reports; each university, as well as government agencies, is asked to contribute funding annually.
For example, scientists at the University of Minnesota wood utilization research center formed a productivity cooperative that includes state, county, university, and industry members (such as International Paper) to continue to strengthen applied forestry concepts and ensure the sustainability of Minnesota's forest products industry. The Forest Service's Northeastern Research Station formed the Furniture Steering Committee, which is composed of furniture manufacturers, consultants, equipment manufacturers, state economic development agencies, and universities, to provide guidance on furniture research programs at the station and elsewhere. The steering committee recommended research on more efficient manufacturing and "just-in-time" training, which has been integrated into the research work unit's plan. HUD's Partnership for Advancing Technology in Housing is a voluntary partnership between leaders of the home building, product manufacturing, insurance, and financial industries; and representatives of six federal agencies concerned with housing. These six agencies work with HUD to develop technologies to improve the quality, durability, energy efficiency, and affordability of residential building materials; these materials could include wood. For example, with the partnership's support, the Forest Service's wood chemistry research work unit has been able to work cooperatively with laboratories in Japan, Sweden, and Finland on developing coatings to protect wood from the effects of weathering. Forest Service scientists at the Southern Research Station's Utilization of Southern Forest Resources work unit have a memorandum of understanding with the Chinese government to host post-doctoral students from China; the station has hosted 25 students in the past 5 years. These students serve as additional staff resources to help the research work unit carry out its research activities. To construct a forest biomass life cycle assessment model, several partners established a joint venture: the Forest Service's Pacific Southwest Research Station; the California Energy Commission's Public Interest Energy Research Program; the University of California at Davis; several state and federal agencies; and energy, forestry, and environmental consultants. Partners will use the model to identify and analyze the social, economic, and environmental costs and benefits of using forest biomass to generate electrical power. This research project is planned in three phases over a 3- to 5-year period. Each participant shares in the cost of the venture. The 12 federal agencies we reviewed made available at least $54 million annually in financial support for wood utilization research and product development activities in fiscal years 2004 and 2005, measured either in budget authority or expenditures. Furthermore, the Forest Service employed almost 175 scientists and support staff in each of these two fiscal years. From fiscal years 1995 through 2005, the Forest Service received total budget authority of $268 million for wood utilization research and product development (or $289 million in 2004 inflation-adjusted dollars), while CSREES' budget authority for the wood utilization research centers was about $51 million (or $55 million in 2004 inflation-adjusted dollars). For fiscal years 1995 through 2005, the Forest Service's budget authority for wood utilization research and product development activities fluctuated moderately from year to year (in 2004 inflation-adjusted dollars).
Over the same period, overall, CSREES' budget authority for the wood utilization research centers increased (in 2004 inflation-adjusted dollars), in part because four new wood utilization research centers were added during fiscal years 1999, 2000, and 2004. The 12 federal agencies we identified as supporting wood utilization research and product development made available at least $54.4 million in financial support for this work, measured in either budget authority or expenditures, in fiscal year 2004, the year with the most complete data available. For fiscal year 2005, the agencies made available at least $54.3 million. Our data for fiscal year 2005 are complete except for data for the CSREES grants funded under the McIntyre-Stennis Act and the Hatch Act; the National Research Initiative; Small Business Innovation Research grants; and other small grants. See table 5. As table 5 shows, the Forest Service made available about half of the financial support for conducting wood utilization research and product development. In fiscal year 2004, the Forest Service made available about 52 percent of the $54.4 million, while four other agencies—CSREES, the Department of Energy, the National Science Foundation, and the Natural Resources Conservation Service—made available about 44 percent of the support; the remaining seven agencies together made available about 5 percent of the $54.4 million. Of the $54.4 million made available in fiscal year 2004, about $34 million ($28.3 million for the Forest Service and $5.7 million for the CSREES wood utilization research centers) was directly targeted to wood utilization research and product development. In addition, $1.9 million of other support targeted for wood utilization research and product development was made available by the Army, the Coast Guard, and the Office of Naval Research through committee-directed funding to specific universities to conduct research on wood composites. The remaining $18.5 million of the $54.4 million was made available in fiscal year 2004 from grant programs not targeted to wood utilization research and product development. That is, wood utilization research and product development was not the sole purpose of the grant or program. The Department of Energy made available the largest amount of this nontargeted support—$7.4 million. CSREES provided $3.0 million in fiscal year 2004 to support other wood utilization research and product development through grant programs authorized under the McIntyre-Stennis Act and the Hatch Act; the National Research Initiative; Small Business Innovation Research grants; and other small grants. The Natural Resources Conservation Service made available grant funding to promote greater innovation and development in all forms of biomass—including agricultural and woody biomass—with $5.3 million targeted to woody biomass research under the Biomass Research and Development Act of 2000. The other agencies made available the remaining $2.8 million. Of the 12 agencies, only the Forest Service directly employs full-time scientists and support staff to conduct wood utilization research and product development. Most of these employees work at the Forest Products Laboratory, as shown in table 6. The other 11 agencies we reviewed do not have full-time federal scientists dedicated to wood utilization research and product development, and were unable to provide information on scientists and support staff working on federal wood utilization research and product development activities.
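The fiscal year 2004 amounts reported above can be tallied as follows. This is a minimal sketch using only the rounded figures in the text, so the total and the shares differ slightly from the exact amounts because of rounding.

    # Reconciling the reported fiscal year 2004 support (rounded, in millions
    # of dollars); small differences from the text reflect rounding.
    targeted = {
        "Forest Service": 28.3,
        "CSREES wood utilization research centers": 5.7,
        "Army, Coast Guard, and Office of Naval Research": 1.9,
    }
    nontargeted = 18.5  # grant programs not targeted to wood utilization

    total = sum(targeted.values()) + nontargeted
    forest_service_share = targeted["Forest Service"] / total
    print(f"Total made available: ${total:.1f} million")        # about 54.4
    print(f"Forest Service share: {forest_service_share:.0%}")  # about 52%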
For fiscal years 1995 through 2005, the Forest Service received total budget authority for wood utilization research and product development of $268 million (which is equivalent to $289 million in 2004 inflation-adjusted dollars). As table 7 shows, during this 11-year period, the annual budget authority ranged between $24.2 million and $28.2 million (in 2004 inflation-adjusted dollars), with moderate fluctuations from year to year. Table 8 shows the total FTE scientists and support staff for the Forest Service's wood utilization research work units, from fiscal years 1995 through 2005. As figure 2 shows, over the period, the levels of budget authority (adjusted for inflation) and FTE staff for wood utilization research and product development at the Forest Service fluctuated moderately. From fiscal year 1995 to fiscal year 1996, budget authority (in 2004 inflation-adjusted dollars) and FTE staff at the Forest Service decreased by 14 percent and 4 percent, respectively. After 1996, budget authority for the most part increased through 2004 and then decreased in 2005. FTE staff continued to decrease through 1999, increased in 2000, and thereafter remained relatively stable. (See app. IV for information on changes in FTE Forest Service scientists and support staff for wood utilization research work units for each year from fiscal year 1995 through 2005.) During the 11-year period, the Forest Products Laboratory's budget authority also fluctuated moderately. Between fiscal years 1995 and 2000, the budget authority declined by 17 percent (in 2004 inflation-adjusted dollars), from $20.8 million to $17.3 million; it increased again from fiscal years 2001 through 2004, but was still lower in 2005 than in 1995. (See table 9.) Table 10 shows the total FTE scientists and support staff for the Forest Products Laboratory's wood utilization research work units, from fiscal years 1995 through 2005. The number of FTE Forest Products Laboratory scientists and support staff generally declined from fiscal years 1995 through 2000; it then fluctuated moderately. Figure 3 shows the changes in budget authority and FTE scientists and support staff at the Forest Products Laboratory. See appendix IV for funding and FTE staff, by research work unit, at the Forest Products Laboratory and at the research stations for fiscal years 1995 through 2005. While financial support for wood utilization research and product development at the Forest Service has fluctuated moderately during the past 11 years, Forest Service scientists and managers expressed concerns about resource constraints. They noted that increases in budget authority cover salary increases and other fixed costs but may not be enough to cover rising costs for other operating expenses—such as purchasing or calibrating equipment, obtaining laboratory supplies, and traveling for research. The Forest Products Laboratory's operating budget authority declined by about 67 percent between fiscal years 1995 and 1998 (in 2004 inflation-adjusted dollars), from about $1.95 million to $650,000; it then fluctuated within a narrow range from fiscal years 1999 to 2005, ending at $630,000. (See table 11.) Figure 4 shows changes in the dollars available for operating expenses (adjusted to 2004 dollars) in fiscal years 1995 through 2005 at the Forest Products Laboratory. Many of the scientists with whom we spoke cited instances in which fewer resources had diminished their ability to conduct research.
For example, one scientist said that he spends less time in the laboratory because he must devote more time to obtaining outside funding for his research work unit. Another scientist told us that his research work unit must now limit the number of wood samples from private sources that the unit has time to analyze, which it did not need to do in the past. According to Forest Service officials, in part because of funding constraints and in part to better serve the scientific community, the Forest Products Laboratory has developed a strategic plan and is in the process of reorganizing and consolidating its research work units and reducing the number of scientists and support staff. Table 12 shows that the total budget authority for fiscal years 1995 through 2005 for CSREES' wood utilization research centers was about $51.2 million (which is equivalent to $54.8 million in 2004 inflation-adjusted dollars), and figure 5 illustrates that, overall, CSREES' budget authority (adjusted for inflation) for the wood utilization research centers increased over the period. The increase in budget authority was due in part to the addition of four new wood utilization research centers, particularly when two new centers were added in fiscal year 1999; new centers were added again in fiscal years 2000 and 2004. While the increase in the number of wood utilization research centers would suggest an increased commitment to wood utilization research and product development, after adjusting for inflation, most of the centers, individually, experienced a downward trend in budget authority, as table 13 shows. (See app. IV for wood utilization research centers' budget authority in nominal dollars over the period.) The 12 federal agencies generally rely on scientists and technology transfer specialists to transfer technologies to industry through a variety of methods, such as information dissemination, technical assistance, demonstration projects, and patents and licensing. While federal scientists are involved in some technology transfer, their primary responsibility is research; in contrast, specialists are responsible solely for technology transfer. In addition, the Forest Service has a unit dedicated to transferring the results of wood utilization research and product development: the Technology Marketing Unit (TMU). We identified a number of examples of activities that have occurred using each of the technology transfer methods, mostly from the Forest Service and CSREES wood utilization research centers. Scientists are expected to transfer the results of their work and primarily disseminate information through publications—particularly those in peer-reviewed journals—which help establish the validity of their research results. The Forest Service counts the number of articles published in these journals to assess scientists' performance and reports this information as a performance measure for research in its annual report to Congress. Furthermore, according to Forest Service scientists, some industry officials may also read and use these journals. For example, a window and door manufacturer used information from a journal article on the characteristics of wood from smaller trees for use in composites to develop a new, higher-value use for this wood. Instead of burning the wood as waste, the manufacturer now uses it in its products.
Scientists also disseminate research results to industry through a variety of other methods, including publications that are not peer reviewed, Web sites, presentations of their work at professional meetings, and workshops. Specific examples include the following: Publications that are not peer reviewed include the Forest Service's one-page information sheets, TechLines; technical reports; industry magazines; trade journals; and training manuals. For example, one training manual was developed after industry representatives asked a Forest Service scientist to create a publication on avoiding accidents caused by improperly constructed logging trails. Scientists also contribute to user manuals that are important to the building industry and homeowners, such as Finishes for Exterior Wood—20,000 copies sold in the past 10 years; and the Wood Mold Maintenance Manual—10,000 copies in circulation. Most of the Forest Service's wood utilization research work units maintain Web sites that list articles or provide links to articles and contact information. For example, a research work unit in the Southern Research Station reported that 18,335 distinct users—approximately 1,528 per month—accessed its Web site in 2004, downloading 37,376 publications. Some of CSREES' wood utilization centers also have Web sites, and some scientists have their own Web sites devoted to their wood utilization research and product development. The Wood Education and Resource Center in West Virginia, part of the Forest Service's State and Private Forestry program, offers a grant program to transfer research results. In one instance, grant funds helped support the issuance of three newsletters informing pallet producers, shippers, and technical assistance personnel of the latest developments in implementing new international regulations. These regulations require that all pallets crossing international boundaries be treated to prevent the spread of invasive species. Additionally, three technical bulletins summarizing the results of the center's applied research in this area were developed and distributed to an international audience. Workshops conducted by scientists for industry include the University of Minnesota's industry-specific training on streamlined manufacturing procedures for over 75 companies, which has resulted in partnerships with 15 of them. University of Minnesota scientists reported that these partnerships have led to productivity improvements of 50 to 75 percent and cost reductions of 25 to 50 percent, with estimated financial impacts of over $750,000. Forest Service scientists have shared information through broadcasts. A radio host in Arkansas has a weekly show on forestry issues, and scientists from the Southern Research Station have appeared on the show to discuss their research. The Forest Products Laboratory conducts "Entrepreneur Tours" in which small- to medium-size mill operators from western states tour the Forest Products Laboratory to learn about current research and how they can use it. Technology transfer specialists—those at the Forest Service's State and Private Forestry program and extension specialists at universities—also play a key role in disseminating information to industry. As of February 2006, the Forest Service employed nine technology transfer specialists, who also provide other types of assistance to small businesses. Like scientists, specialists reach industry and other users through Web sites and publications—particularly those that are not peer reviewed, like trade journals, newsletters, and industry magazines.
Specialists sometimes work directly with scientists to disseminate research to targeted users. For example, technology transfer specialists at Louisiana State University's extension program publish the Dry Kiln Club newsletter, which provides updated research results from the university's scientists on wood-drying and moisture-related wood decay to an audience of over 1,000. Extension specialists also disseminate information through targeted group education to industry and other users. This education includes short courses, continuing education courses, and workshops. Specialists often develop these courses using the results of research conducted at their university and other universities, the Forest Service, and other federal and state agencies. Specific examples include the following: Extension specialists at Virginia Tech University offered 27 short courses to industry in calendar year 2004. In one of these courses, they combined research from the College of Business with their own knowledge of wood science to teach methods for selling wood products. Extension specialists in Ohio taught a multiweek course to landowners on how to prune and manage their trees and market their products. The course was designed to help the landowners take advantage of a new pallet plant that would soon be opening in their area. Extension specialists at Mississippi State's wood utilization research center have provided logger education to over 3,000 logging firms during the past 10 years. Extension specialists at West Virginia University's Appalachian Hardwood Center have conducted technology transfer and outreach efforts for the past 15 years. For example, in October 2004, the center hosted a log-sawing and grading workshop that focused on the efficient grading and recovery of lumber from low-grade logs. To enhance competitiveness in the region's forest products industry, the University of Tennessee's Forest Products Center has a wood products extension specialist who conducts workshops, issues newsletters, and takes other actions to transfer information from the CSREES wood utilization research center to industry. Technology transfer specialists also attend industry and professional conferences and meetings, where they present information and meet with industry representatives to build their networks. In addition, they disseminate information by creating directories that provide contact information for wood industries in their state. Both scientists and technology transfer specialists provide technical assistance through (1) telephone calls, (2) hands-on assistance, and (3) software development. They respond to telephone calls requesting assistance from industry, consumers, and homeowners. For example, one scientist at Oregon State University estimated receiving over 200 calls per year; another scientist estimated receiving over 400. Forest Products Laboratory managers estimated that they receive 4,000 such calls per year. Scientists and technology transfer specialists also provide industry and others with hands-on technical assistance. Examples include the following: Forest Products Laboratory scientists provided technical assistance to help a small company improve its manufacturing efficiency by applying research on the fasteners and connectors used to assemble and disassemble portable flooring. This company produces flooring for the National Collegiate Athletic Association.
Forest Products Laboratory scientists helped a large drumstick manufacturer solve a durability problem by developing a way to inject drumsticks with a polymer to strengthen them. Forest Products Laboratory scientists provide technical assistance by identifying wood samples for companies, as well as for private citizens. As part of this wood identification, they assist manufacturers in resolving problems they have in using different types of woods with different finishes. In 2004, they identified 600 specimens for industry, 350 specimens for government agencies, and 370 specimens for the general public. For 12 years, the University of Minnesota has worked with a company to provide support in material selection, prototyping, performance testing, and market assessment and development. These efforts have helped the company introduce several new product lines in office furniture, store fixtures, and cabinet components; expand from 30 to 450 employees; and increase its sales from $5 million to $50 million annually over that period. The Department of Energy offers energy assessments of facilities that manufacture wood products or produce pulp and paper, although the department requires a substantial investment from the company. According to the Department of Energy, these assessments have resulted in an annual savings of up to $9 million for some companies. Agencies also develop software and make it available, often for free, on Web sites. For example, a computer program developed by researchers at the Forest Service's Northeastern Research Station provides a realistic simulation model that allows industry to identify more efficient strategies to reduce waste in the manufacturing process. More than 700 computer program packages have been sent to industry, and follow-up telephone calls by Forest Service scientists indicate that the program is being used in planning and optimization activities by many of the recipients. Similarly, the Department of Energy's Industrial Technologies Program provides free software tools to the forest products industry to improve energy efficiency in industry processes. Agencies also transfer research results through demonstration or pilot projects in mills and plants and on-site at research locations. Specific examples include the following: The Forest Products Laboratory built a research demonstration house on-site in 2001. The research in the demonstration house focuses on improving the use of traditional wood products, recycled and engineered wood composites, natural disaster resistance, energy efficiency, and indoor air quality. Features include a permanent wood foundation and engineered wood composites in the roof. In cooperation with the homebuilding and forest products industries, the Forest Products Laboratory constructed a house on the Washington, D.C., Mall as part of the 2005 annual Smithsonian festival. The house showcases new technologies developed by the Forest Products Laboratory and cooperators, such as manufacturers of structural insulated panels. The house was visited by several thousand people over the course of the 10-day festival. Forest Products Laboratory scientists helped a company implement a demonstration project in its sawmill. The project showed that, with improvements to the company's machinery for determining lumber quality, the company could increase efficiency by as much as 12 percent—thus adding an estimated $1.2 million annually in profit.
Scientists at West Virginia University's wood utilization research center have developed a new technology for using oak as a raw material in the manufacture of OSB. The Weyerhaeuser Company and other industry partners are testing the process and the resulting strands in test runs to verify the results. If the technology is successful, the research work unit anticipates lower raw material costs and increased use of oak as an engineered wood product component. Success could lead to new or expanded OSB manufacturing facilities, and new jobs, in the Appalachian region. Forest Service scientists at the Southwest Wildland/Urban Interface and Forest Health Restoration research work unit, in Flagstaff, Arizona, have joined with Northern Arizona University to develop framing techniques using small-diameter logs. This partnership has led to a demonstration project with the Navajo Nation to develop hogans using small-diameter wood. Hogans are traditional housing structures that tribes still use and are typically built with more costly wood from larger trees. HUD, through its Partnership for Advancing Technology in Housing program, helped a builder in North Carolina to demonstrate the durability and cost of various building materials (including insulated composite wood panels) in four residential duplex units. The builder agreed to build each duplex out of a different building material, and HUD is evaluating the materials' performance at this site. The Office of Naval Research has several demonstration projects in place using wood-plastic composite materials to replace wooden pier components, such as deck boards and fendering components. Such demonstrations help Navy engineers become familiar with new technologies and their benefits before the technologies are widely available. The Coast Guard, in a contract with the University of Maine for composite wood research, requires the university to demonstrate that the composite structures it developed could be used in a marine environment and be more durable than traditional structures. The university will build a dock for the Coast Guard to demonstrate the use of the composite material it has developed. Technology can also be transferred to industry through licensing and patenting. The Forest Service employs one full-time patent attorney, stationed at the Forest Products Laboratory, to assist scientists in patenting inventions they create as part of federally sponsored research projects; industry can then license these patents. The Forest Service Patent and Licensing Program handles all aspects of patents and licensing, including reviewing invention disclosures, filing and prosecuting patent applications, and negotiating patent licenses and other technology transfer-related agreements. Between January 1, 1995, and December 3, 2005, a total of 58 patents were issued, and 12 applications related to wood utilization are currently pending, according to the Forest Service. Scientists at the CSREES wood utilization centers also obtain patents on processes and products they have developed. For example, scientists at the University of Minnesota's wood utilization research center have obtained over 20 patents that they have then licensed to private industry. These patents include those for extracting chemicals from birch bark that can be used in medicine, in manufacturing absorbent panels, and in a foam-and-wood composite log used for siding.
They also reported having a number of pending patent applications in the areas of housing systems and the extraction of natural chemicals from birch bark waste products. The Forest Service has a unit dedicated to transferring the results of wood utilization research and product development activities—the TMU, part of the State and Private Forestry Program, located at the Forest Products Laboratory. TMU's mission is to improve wood utilization by transferring technologies developed primarily by the Forest Products Laboratory and other Forest Service research units. As of February 2006, TMU employed four technology transfer specialists with expertise in wood utilization and product development. These specialists collaborate with Forest Service scientists, primarily at the Forest Products Laboratory, to provide technical assistance to local governments, private landowners, rural communities, and forest industries to ensure the ready adoption of technologies based on forest materials. Like scientists and other technology transfer specialists, TMU's specialists disseminate research results through publications, conferences, and workshops. Specific examples include the following: In fiscal years 2004 and 2005, TMU reported distributing 40,000 and 6,900 publications, respectively. For example, TMU's newsletter, the Forest Products Conservation and Recycling Review, has a circulation of over 800. In fiscal year 2005, it published 19 issues of TechLines on topics ranging from the outdoor performance of wood-plastic composites, to wood flooring and roofing, to using waste wood for filtering water. TMU participated in 45 workshops, conferences, presentations, training sessions, and exhibits in fiscal year 2004 that were attended, in total, by over 5,000 people. In 2004, TMU cosponsored the SmallWood conference in Sacramento, California, which was attended by over 350 participants, including harvesting contractors, rural development officials, community leaders, forest products business owners, environmental groups, and tribes. TMU provided an updated software tool—the Fuel Value Calculator—that allows users to compare the unit costs of various heating fuels, so that wood can be compared with conventional fossil fuels such as natural gas or fuel oil. The calculator is available on TMU's Web site. In addition, since TMU's technology transfer specialists are located on-site with Forest Products Laboratory scientists, they have an opportunity to learn about the research from its early stages. Furthermore, when a technology is developed, the specialists can work with the scientists to conduct a market analysis to determine potential applications. For example, in 2004, TMU published Assessing the Market Potential of Roundwood Recreational Buildings, which provides information on the applicability of the Forest Products Laboratory's research on roundwood. TMU also transfers technology to users by providing technical assistance directly to industry, communities, and individuals nationwide, as well as by conducting demonstration projects. Specifically, TMU specialists perform the following activities: Answer numerous phone inquiries and letters, and host visitors—over 2,000 in both fiscal years 2004 and 2005. Specialists provide answers to technical questions, point a user to key information sources, or provide a link and contact information for researchers working in a user's area of interest. Travel to facilities to provide hands-on advice and answer questions.
For example, TMU helped a remote California logging community hard-hit by mill closures create over 100 new jobs through a small forest products company and a nonprofit training center. Applying Forest Products Laboratory research, TMU specialists helped the company specialize in producing flooring from small-diameter trees by, among other things, providing solutions to product imperfections like warping and discoloration. Work with companies and communities in implementing research results or new technology through pilot and demonstration projects. For example, TMU staff are working with the Department of Energy's National Renewable Energy Laboratory on a project testing small-scale biomass modular units, called "BioMax 15s," that use wood chips to create electricity. The technology is still in the pre-commercial phase, so the department and the TMU are using a demonstration program at several sites around the country, including a high school in Walden, Colorado, and a furniture-making business at the Zuni Pueblo in New Mexico. In addition to its technology transfer responsibilities, in fiscal year 2005, the unit led the evaluation of proposals for USDA's Woody Biomass Grant Program. This program made available over $4 million in grants designed to increase the utilization of woody biomass from or near National Forest System lands. The program aims to improve forest restoration activities by using, and creating markets for, small-diameter material and low-value trees removed during activities to reduce hazardous fuels. Grants could range in value from $50,000 to $250,000. We provided a draft of this report for review and comment to USDA's CSREES, Forest Service, and Natural Resources Conservation Service; Defense; Department of Energy; Department of Homeland Security; HUD; Interior; Department of Transportation; and the National Science Foundation. The Forest Service, the Department of Transportation, Energy, and Interior provided technical comments, which we incorporated as appropriate. CSREES, the Natural Resources Conservation Service, Defense, the Department of Homeland Security, HUD, and the National Science Foundation did not have comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Agriculture, Defense, Energy, Homeland Security, Housing and Urban Development, Interior, and Transportation; the Director of the National Science Foundation; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report describes (1) the types of wood utilization research and product development activities supported by federal agencies and how these efforts are coordinated; (2) the level of support federal agencies made available for these activities in fiscal years 2004 and 2005, and changes in the level of support at the U.S.
Department of Agriculture’s Forest Service and at the Cooperative State Research, Education, and Extension Service (CSREES)-funded wood utilization research centers from fiscal years 1995 through 2005; and (3) how the federal government transfers technologies and products from its wood utilization research and product development activities to industry. For this review, we defined wood utilization research and product development as those activities that occur from harvesting the wood through the recycling of wood and paper products. To better understand the focus of the federal research and development efforts in wood utilization, we worked with Forest Service and CSREES program officials to develop the following five broad categories: (1) harvesting—using scientific and engineering principles to ensure cost-effective, environmentally acceptable, and safe forest operations, including planning, road building, harvesting, handling and processing, and transportation; (2) wood properties—studying the basic and applied physical, chemical, and mechanical properties of wood and wood fiber to determine the suitability of this material for various uses, from pulp to structural beams to recycled composite products; (3) manufacturing and processing—new and better manufacturing ways to extract, reduce, and convert virgin wood raw materials to useful products and the development of technology to allow the re-use of materials and products to the maximum extent possible; (4) products and testing—developing test methods and gathering and evaluating data on the differing uses of wood and wood fiber products; and (5) economics and marketing. This final category includes evaluating and tracking domestic and international supply and demand trends, and trade policies, and markets, including market opportunities; and harvesting and production costs for alternative material and energy inputs and processing options. We performed our work at 12 federal agencies that support wood utilization research and product development activities. These include CSREES, the Forest Service, and the Natural Resources Conservation Service; the Department of Defense’s (Defense) Army, Army Corps of Engineers, and the Office of Naval Research; the Department of Energy; the Department of Homeland Security’s Coast Guard; the Department of Housing and Urban Development (HUD); the Department of the Interior’s (Interior) Bureau of Indian Affairs; the National Science Foundation; and the Department of Transportation. To answer the first objective—describing the types of wood utilization research and product development activities supported by federal agencies and how these efforts are coordinated—we collected information on research and product development activities at the 12 agencies for fiscal years 2004 and 2005 and worked with the Forest Service and CSREES to place these activities into one of the five categories we had developed. Because certain Forest Service research work units and CSREES-funded wood utilization research centers are specifically dedicated to wood utilization research and product development, we collected data on research activities for fiscal years 1995 through 2005 to understand how these activities changed over time. At the Forest Service, we used a data collection instrument to systematically gather data on the 27 research work units’ plans for wood utilization research and product development, covering fiscal years 1995 through 2005. Because these plans span multiple years, some dated back as far as 1988. 
In total, we examined the 71 plans for the 16 research work units at the Forest Products Laboratory and 11 research work units that were associated with other research stations within the Forest Service—4 in the Northeast, 4 in the South, 1 in the Pacific Northwest, 1 in the Pacific Southwest, and 1 in the Rocky Mountains. From these plans, we collected information on each research work unit's mission, research problems, and selected research activities. (See app. II.) We also interviewed each research work unit's project leader on the unit's wood utilization research and product development activities. For CSREES, we examined the 10 wood utilization research centers at 12 universities that receive congressional committee-directed grants for wood utilization research and product development. Nine of these centers are at the universities of Alaska Southeast, Minnesota-Duluth, Maine, and Tennessee and at Michigan State University, Mississippi State University, North Carolina State University, Oregon State University, and West Virginia University; the tenth center is divided among three universities—the University of Idaho, the University of Montana, and Washington State University—that participate in the Inland Northwest Forest Products Research Consortium. To identify these centers' wood utilization research and product development activities, we obtained copies of the research proposals that the centers submit annually to CSREES. We used a data collection instrument to (1) systematically review the 88 proposals for fiscal years 1995 through 2005; (2) obtain information on the research objectives, approach, and description of wood utilization research and product development activities; and (3) summarize selected activities for reporting purposes. We also used CSREES' Current Research Information System (CRIS) to obtain concise, nontechnical descriptions of selected activities and to ensure that the CRIS summaries reflected the information in the CSREES proposals. We interviewed knowledgeable agency officials regarding the reliability of data we used from CSREES' CRIS database and compared selected CRIS data with grant files. We used the data from CSREES for descriptive purposes only, and determined that the data were sufficiently reliable for these purposes. For reporting purposes, we primarily relied on the CRIS summary information to describe the selected research activities presented in appendix III. To identify other CSREES wood utilization research and product development activities in fiscal years 2004 and 2005, CSREES officials queried the CRIS database using key search codes to identify the wood utilization research and product development activities being conducted under other CSREES-funded grant programs. At the time of our review, the CRIS database did not contain complete information for fiscal year 2005. We reviewed the 104 grant projects that fell within our definition of wood utilization research and product development. To collect information on wood utilization research and product development from the remaining 10 agencies, we interviewed agency officials and reviewed and summarized available information on the research activities for fiscal years 2004 and 2005. To obtain information on the coordination of wood utilization and product development activities among the 12 federal agencies, we interviewed agency officials to obtain their views on the use of informal and formal coordination mechanisms.
For all agencies, we obtained this information through interviews with program officials and scientists. In the case of CSREES wood utilization research centers, we obtained this information through a data collection instrument sent to the program leader at each center. In addition, we obtained documents on selected formal coordinating mechanisms, such as interagency agreements. We also attended the "Agenda 2020" meeting sponsored by the Forest Service in 2005, which is held annually to exchange information between industry and Forest Service scientists performing wood utilization research and product development activities. The Forest Service uses these meetings to seek industry views on research results and future research needs. We also examined relevant laws, regulations, and agency policies related to coordination for wood utilization research and product development. To address the second objective—describing the level of support federal agencies made available for wood utilization research and product development activities in fiscal years 2004 and 2005, and changes in the level of support at the Forest Service and CSREES wood utilization research centers from fiscal years 1995 through 2005—we collected budget authority or expenditure information from the 12 agencies for fiscal years 2004 and 2005, and from the Forest Service and CSREES' wood utilization centers for fiscal years 1995 through 2005. We reported dollars as either budget authority or expenditures, depending on the availability of agency data. We analyzed these data in both nominal (actual) dollars and dollars adjusted for inflation (real). Most agencies and programs received congressional committee-directed budget authority for wood utilization research and product development or allocated a portion of their budget authority for these activities. Those budget authority amounts are reported when available. However, the only data available for the other CSREES grants and for the National Science Foundation were expenditure data. For information on CSREES' budget authority for the grants awarded to the wood utilization research centers for fiscal years 1995 through 2005, a CSREES official explained how the funds were allocated across the 10 wood utilization research centers over the 11-year period. These data were used to show the historical trends of investment dollars for wood utilization research and product development over the past 11 years. (See app. IV.) In addition to the budget authority for the CSREES wood utilization research centers, we obtained expenditure data for the wood utilization research and product development activities conducted under the authority of the McIntyre-Stennis Act, the Hatch Act, the National Research Initiative, Small Business Innovation Research grants, and other small grants, all of which can fund wood utilization research and product development. We obtained specific expenditure amounts for these activities for fiscal year 2004 from the CRIS database system. Fiscal year 2005 data were not available for these CSREES activities. For the Forest Service, we obtained information on budget authority from an internal agency review of research stations and research work units. We used this information to provide an overview of the changes in budget authority for wood utilization research and product development for fiscal years 1995 through 2005. See appendix IV for the budget authority for each research work unit over this period.
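For readers unfamiliar with the adjustment from nominal to constant 2004 (real) dollars described above, the sketch below illustrates the standard deflator calculation. The index values and the 1995 amount shown are placeholders for illustration only; the report does not specify which price index was used.

    # Illustrative conversion of nominal dollars to constant 2004 dollars using
    # a price index (deflator). The index values below are placeholders; the
    # report does not identify the actual index used.
    def to_2004_dollars(nominal, index_for_year, index_2004=1.00):
        """Scale a nominal amount by the ratio of the 2004 index to that year's index."""
        return nominal * (index_2004 / index_for_year)

    price_index = {1995: 0.83, 2000: 0.91, 2004: 1.00}  # hypothetical values
    nominal_1995 = 24.0  # hypothetical nominal budget authority, in millions

    real_1995 = to_2004_dollars(nominal_1995, price_index[1995])
    print(f"${nominal_1995:.1f} million nominal in 1995 is about "
          f"${real_1995:.1f} million in 2004 dollars")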
In addition, we interviewed Forest Service budget officials in headquarters, the Forest Products Laboratory, and the State and Private Forestry Program on budget and other funding issues, such as the allocation of funds and setting of research funding priorities. We concluded that the data provided in the internal agency review were sufficiently reliable for the purposes of our review. We also reviewed and summarized information from Forest Service documents on the number of scientists and research support staff at the Forest Service—the only agency that has full-time federal employees who directly conduct wood utilization research and product development activities. We reported the number of full-time equivalent (FTE) staff at each of the 27 research work units that conducted research on wood utilization and product development for fiscal years 1995 through 2005. (See app. IV.) To collect funding information from the remaining agencies, we asked budget and program officials for budget authority or expenditure information for fiscal years 2004 and 2005 for wood utilization research and product development. Specifically, the National Science Foundation provided us with expenditure information from its Project Reports Summary and Search and Awards databases because that is the only way it could identify the amounts devoted to wood utilization research and product development. We interviewed knowledgeable agency officials regarding the reliability of these data. We used the data for descriptive purposes only, and determined that the data were sufficiently reliable for these purposes. The funding for Defense's Army, Corps of Engineers, and Office of Naval Research and for the Department of Homeland Security's Coast Guard was congressional committee-directed funds or budget authority. However, for the Office of Naval Research and the Coast Guard, we reported expenditures because those amounts were applicable to our time period—fiscal years 2004 and 2005. To respond to the third objective—how the federal government transfers technologies and products from its wood utilization research and product development activities to industry—we obtained and reviewed relevant legislation and policies and procedures on federal technology transfer activities. At the Forest Service, we interviewed and obtained examples of successful technology transfer from project leaders at the 27 research work units responsible for wood utilization research and product development; a patent attorney; technology transfer program managers at the Technology Marketing Unit located at the Forest Products Laboratory; and technology transfer specialists in the State and Private Forestry Program. At CSREES, we had discussions with program research officials and extension specialists. In addition, we sent a short data collection instrument to the 10 wood utilization research centers to obtain information on how they transfer the results of their research to industry, as well as to obtain examples of successful transfer efforts. We did not assess the success of these agencies' reported efforts, nor did we try to quantify the results of these efforts.
We also conducted site visits at a limited number of federal, university, and industrial facilities—the Forest Products Laboratory; Forest Service facilities in Virginia, West Virginia, and Oregon; the wood utilization research center at Oregon State University; the Western Wood Products Association; the APA Engineered Wood Association; and a Weyerhaeuser Company research laboratory in Washington State. We also visited a sawmill, a manufacturer of wooden steps and stair posts, a manufacturer of engineered products, and a cabinet maker, and attended the 2005 Northeast Utilization and Marketing Council's conference. We performed our work between February 2005 and May 2006 in accordance with generally accepted government auditing standards.

This appendix presents examples of work conducted and planned for the Forest Service's research work units at the Forest Products Laboratory (table 14), and in work units associated with five research stations: Northeastern, Pacific Northwest, Pacific Southwest, Rocky Mountain, and Southern (table 15).

This appendix presents information on CSREES' wood utilization research centers, including some of their objectives, specialty areas, and research activities over 11 years—fiscal years 1995 through 2005. The centers' specialties are as follows:

One center specializes in assisting the Alaska forest products industry through research, extension, and education activities.

One consortium uses a multidisciplinary, multi-institutional approach to solving forest operations and wood utilization problems unique to the Inland Northwest region; the consortium consists of the universities of Idaho and of Montana, and Washington State University.

One center specializes in all aspects of utilization concerning species indigenous to the New England area.

One center specializes in sustainable hardwood utilization, with a focus on wood preservation, wood composite materials, and genetic engineering of necessary wood properties for specific product development.

One center specializes in timber harvesting, transportation, and economics; lumber manufacturing and processing; wood-based composite materials; protection and preservation of wood; wood chemistry; economic evaluation; and technology transfer.

One center specializes in wood machining and tooling technology.

One center specializes in science, technology, and business practices that will enhance the domestic and global competitiveness of the U.S. wood products industry, especially in the western United States, ensuring more efficient use of available wood resources; a special emphasis is placed on training future scientists, researchers, and practitioners.

One center specializes in southern Appalachian hardwood utilization and manufacturing of composite materials.

One center specializes in improving the utilization of upland hardwoods in Appalachian forests.

This appendix presents budget authority information for the Forest Service, information on FTE scientists and support staff for the Forest Service, and budget authority information for CSREES wood utilization research centers, from fiscal years 1995 through 2005.

In addition to the contact named above, Andrea Wamstad Brown, Jacqueline Adams Cook, Richard Johnson, Rebecca Shea, Jay Cherlow, Carol Herrnstadt Shulman, Jeremy Ames, and Jaelith Hall-Rivera made key contributions to this report.
More wood is consumed every year in the United States than all metals, plastics, and masonry cement combined. To maximize their use of wood, forest product companies rely on research into new methods for using wood. At least 12 federal agencies have provided support to wood utilization research and product development activities, including the U.S. Department of Agriculture's Forest Service and Cooperative State Research, Education, and Extension Service (CSREES)-funded wood utilization research centers, which historically have specifically targeted support to these activities. GAO was asked to identify (1) the types of wood utilization research and product development activities federal agencies support and how these activities are coordinated; (2) the level of support federal agencies made available for these activities in fiscal years 2004 and 2005, and changes in the level of support at the Forest Service and at the CSREES-funded wood utilization research centers for fiscal years 1995 through 2005; and (3) how the federal government transfers the technologies and products from its wood utilization research and product development activities to industry. GAO provided a draft of this report to the 12 federal agencies for review and comment. Some of the agencies provided technical comments, which were incorporated as appropriate.

Federal wood utilization research and product development span a broad spectrum of activities. These activities fall into five categories: harvesting, wood properties, manufacturing and processing, products and testing, and economics and marketing. Of the 12 federal agencies that provided support to wood utilization research and product development, only the Forest Service and the CSREES-funded wood utilization centers had activities in all five categories, although all of the agencies had activities in manufacturing and processing. Coordination of these activities is both informal and formal. Scientists informally coordinate their activities by conferring with each other and sharing information at conferences and professional meetings and through publications. In some cases, coordination occurs through more formal mechanisms, such as cooperative arrangements and other joint ventures. During fiscal years 2004 and 2005, the 12 federal agencies made available at least $54 million annually for wood utilization research and product development activities, measured either in budget authority or expenditures. (Dollars are reported as either budget authority or expenditures, depending on the availability of agency data.) The Forest Service made available about half of these funds. In addition, the Forest Service--the only agency that directly employs scientists and support staff to conduct wood utilization research and product development--reported having almost 175 full-time equivalent scientists and support staff in each of these years. For fiscal years 1995 through 2005, the Forest Service's budget authority for wood utilization research and product development activities fluctuated moderately from year to year (in inflation-adjusted dollars). In contrast, overall, CSREES' budget authority for the wood utilization research centers increased over the period (in inflation-adjusted dollars), in part because of the addition of four new wood utilization research centers between fiscal years 1999 and 2004.
To transfer technologies and products to industry, federal agencies generally rely on scientists and technology transfer specialists, who use methods such as information sharing, technical assistance, and demonstration projects. For example, applying research from the Forest Products Laboratory, Forest Service technology transfer specialists assisted a small forest products company in producing flooring from small trees by, among other things, providing solutions to product imperfections like warping and discoloration.
In the legislative history, RRGs were described as essentially insurance "cooperatives," whose members pool funds to spread and assume all or a portion of their own commercial liability risk exposure—and who are engaged in businesses and activities with similar or related risks. Specifically, RRGs may be owned only by individuals or businesses that are insured by the RRG or by an organization that is owned solely by insureds of the RRG. In the legislative history, Congress expressed the view that RRGs had the potential to increase the availability of commercial liability insurance for businesses and reduce liability premiums, at least when insurance is difficult to obtain (during hard markets), because members would set rates more closely tied to their own claims experience. In addition, LRRA was intended to provide businesses, especially small ones, an opportunity to reduce insurance costs and to promote greater competition among insurers in setting insurance rates. Because RRGs are owned by insureds that may have business assets at risk should the RRG be unable to pay claims, they would have greater incentives to practice effective risk management both in their own businesses and in the RRG. The elimination of duplicative and sometimes contradictory regulation by multiple states was designed to facilitate the formation and interstate operation of RRGs. "The (regulatory) framework established by LRRA attempts to strike a balance between the RRGs' need to be free of unjustified requirements and the public's need for protection from insolvencies."

RRGs are not the only form of self-insurance companies; "captive insurance companies" (captives) also self-insure the risks of their owners. States can charter RRGs under regulations intended for traditional insurers or for captive insurers. Non-RRG captives generally exist solely to cover the risks of their parent, which can be one large company (a pure captive) or a group of companies (a group captive). Group captives share certain similarities with RRGs because they also are composed of several companies, but group captives, unlike RRGs, do not have to insure similar risks. Further, captives may provide property coverage, which RRGs may not. Regulatory requirements for captives generally are less restrictive than those for traditional insurance companies because, for example, many pure captives are wholly owned insurance subsidiaries of a single business or organization. If a pure captive failed, only the assets of the parent would be at risk. Finally, unlike captive RRGs, other captive insurers generally cannot conduct insurance transactions in any state except their domiciliary state, unless they become licensed in that other state (just as a traditional company would) and subject to that state's regulatory oversight.

In contrast to the single-state regulation that LRRA provides for RRGs, traditional insurers, as well as other non-RRG captive insurers, are subject to the licensing requirements and oversight of each nondomiciliary state in which they operate. The licensing process allows states to determine if an insurer domiciled in another state meets the nondomiciliary state's regulatory requirements before granting the insurer permission to operate in its state.
According to NAIC's uniform application process, which has been adopted by all states, an insurance company must show that it meets the nondomiciliary state's minimum statutory capital and surplus requirements, identify whether it is affiliated with other companies (that is, part of a holding company system), and submit biographical affidavits for all its officers, directors, and key managerial personnel. After licensing an insurer, regulators in nondomiciliary states can conduct financial examinations, issue an administrative cease and desist order to stop an insurance company from operating in their state, and withdraw the company's license to sell insurance in the state. However, most state regulators will not even license an insurance company domiciled in another state to operate in their state unless the company has been in operation for several years. As reflected in each state's "seasoning requirements," an insurance company must have successfully operated in its state of domicile for anywhere from 1 to 5 years before qualifying to receive a license from another state.

RRGs, in contrast, are required only to register with the regulator of the state in which they intend to sell insurance and provide copies of certain documents originally provided to domiciliary regulators. Although RRGs receive regulatory relief under LRRA, they still are expected to comply with certain other laws administered by the states in which they operate but are not chartered (nondomiciliary states), and they are required to pay applicable premium and other taxes imposed by nondomiciliary states. In addition to this registration requirement, LRRA imposes other requirements that offer protections or safeguards to RRG members: LRRA requires each RRG to (1) provide a plan of operation to the insurance commissioner of each state in which it plans to do business prior to offering insurance in that state, (2) provide a copy of the group's annual financial statement to the insurance commissioner of each state in which it is doing business, and (3) submit to an examination by a nondomiciliary state regulator to determine the RRG's financial condition, if the domiciliary state regulator has not begun or refuses to begin an examination. Nondomiciliary as well as domiciliary states also may seek an injunction in a "court of competent jurisdiction" against RRGs that they believe are in hazardous financial condition. In conjunction with the regulatory relief Congress granted to RRGs, it prohibited RRGs from participating in state guaranty funds, believing that this restriction would provide RRG members a strong incentive to establish adequate premiums and reserves. All states have established guaranty funds, funded by insurance companies, to pay the claims of policyholders in the event that an insurance company fails. Without guaranty fund protection, in the event an RRG becomes insolvent, RRG insureds and their claimants could be exposed to all losses resulting from claims that exceed the ability of the RRG to pay.

Finally, in terms of structure, RRGs and captive insurance companies bear a certain resemblance to mutual fund companies. For example, RRGs, captive insurance companies, and mutual fund companies employ the services of a management company to administer their operations.
RRGs and captive insurers generally hire "captive management" companies to administer company operations, such as making day-to-day operational decisions, preparing financial reports, serving as liaison with state insurance departments, and locating sources of reinsurance. Similarly, a typical mutual fund has no employees but is created and operated by another party, the adviser, which contracts with the fund, for a fee, to administer operations. For example, the adviser would be responsible for selecting and managing the mutual fund's portfolio. However, Congress recognized that the external management of mutual funds by investment advisers creates an inherent conflict between the adviser's duties to the fund shareholders and the adviser's interests in maximizing its own profits, a situation that could adversely affect fund shareholders. One way in which Congress addressed this conflict is the regulatory scheme established by the Investment Company Act of 1940, which includes certain safeguards to protect the interests of fund shareholders. For example, a fund's board of directors must contain a certain percentage of independent directors—directors without any significant relationship to the advisers.

RRGs have had a small but important effect on increasing the availability and affordability of commercial liability insurance, specifically for groups that have had limited access to liability insurance. According to NAIC estimates, in 2003 RRGs sold just over 1 percent of all commercial liability insurance in the United States. However, many state regulators, even those who had reservations about the regulatory oversight of RRGs, believe RRGs have filled a void in the market. Regulators from the six leading domiciliary states also observed that RRGs were important to certain groups that could not find affordable coverage from a traditional insurance company and offered RRG insureds other benefits such as tailored coverage. Furthermore, RRGs, while tending to be relatively small in size compared with traditional insurers, serve a wide variety of organizations and businesses, although the majority serve the healthcare industry. Difficulties in finding affordable commercial liability insurance prompted the creation of more RRGs from 2002 through 2004 than in the previous 15 years. Three-quarters of the RRGs formed in this period responded to a recent shortage of, and high prices for, medical malpractice insurance. However, studies have characterized the medical malpractice insurance industry as volatile because of the risks associated with providing this line of insurance.

RRGs have constituted a very small part of the commercial liability market. According to NAIC estimates, in 2003 a total of 115 RRGs sold 1.17 percent of all commercial liability insurance in the United States. This accounted for about $1.8 billion of a total of $150 billion in gross premiums for all commercial liability lines of insurance. We focused on 2003 market share to match the time frame of our other financial analyses of gross premiums. While RRGs' share of the commercial liability market was quite small, market share and the overall amount of business RRGs wrote increased since 2002. For example, RRG market share increased from 0.89 percent in 2002 to 1.46 percent in 2004. However, in terms of commercial liability gross premiums, the increase in the amount of business written by RRGs is more noticeable. The amount of business that RRGs collectively wrote roughly doubled, from $1.2 billion in 2002 to $2.3 billion in 2004.
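As a rough check on these figures, the short sketch below recomputes the 2003 market share and the 2002–2004 growth from the rounded premium amounts cited in this report; it is illustrative only and is not part of GAO's analysis.

    # Back-of-the-envelope check of the RRG market-share figures cited above,
    # using the rounded premium amounts reported in this section (illustrative only).

    rrg_premiums = {2002: 1.2e9, 2003: 1.8e9, 2004: 2.3e9}  # gross premiums written by RRGs
    total_commercial_liability_2003 = 150e9                  # all commercial liability lines, 2003

    share_2003 = rrg_premiums[2003] / total_commercial_liability_2003
    print(f"2003 RRG share of commercial liability: {share_2003:.1%}")  # ~1.2%; NAIC's precise estimate is 1.17%

    growth = rrg_premiums[2004] / rrg_premiums[2002]
    print(f"Growth in RRG premiums, 2002-2004: {growth:.1f}x")          # ~1.9x, i.e., roughly doubled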
During this same period, the amount of commercial liability written by traditional insurers increased by about 21 percent, from $129 billion to $156 billion. In addition, RRGs increased their presence in the market for medical malpractice insurance. From 2002 through 2004, the amount of medical malpractice written by RRGs increased from $497 million to $1.1 billion, which increased their share of the medical malpractice market from 4.04 percent to 7.27 percent.

Despite the relatively small share of the market that RRGs hold, most state regulators we surveyed who had an opinion—33 of 36—indicated that RRGs have expanded the availability and affordability of commercial liability insurance for groups that otherwise would have had difficulty in obtaining coverage. This consistency of opinion is notable because 18 of those 33 regulators made this assertion even though they later expressed reservations about the adequacy of LRRA's regulatory safeguards. About one-third of the 33 regulators also made more specific comments about the contributions of RRGs. Of these, five regulators reported that RRGs had expanded the availability of medical malpractice insurance for nursing homes, adult foster care homes, hospitals, and physicians. One regulator also reported that RRGs had assisted commercial truckers in meeting their insurance needs.

Regulators from states that had domiciled the most RRGs as of the end of 2004—Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont—provided additional insights. Regulators from most of these states recognized that the overall impact of RRGs in expanding the availability of insurance was quite small. However, they said that the coverage RRGs provided was important because certain groups could not find affordable insurance from a traditional insurance company. All of these regulators cited medical malpractice insurance as an area where RRGs increased the affordability and availability of insurance, but they also identified other areas. For example, regulators from Hawaii and Nevada reported that RRGs have been important in addressing a shortage of insurance for construction contractors. The six regulators all indicated (to some extent) that by forming their own insurance companies, RRG members also could control costs by designing insurance coverage targeted to their specific needs and by developing programs to reduce specific risks. In contrast, as noted by the Arizona regulator, traditional insurers were likely to take a short-term view of the market, underpricing their coverage when they had competition and later overpricing their coverage to recoup losses. He also noted that insurers might exit a market altogether if they perceived the business to be unprofitable, as exemplified in the medical malpractice market. Regulators from Vermont and Hawaii, states that have the most experience in chartering RRGs, added that successful RRGs have members that are interested in staying in business for the "long haul" and are actively involved in running their RRGs. RRG representatives added that RRG members, at any given time, might not necessarily benefit from the cheapest insurance prices but could benefit from prices that were stable over time. Additionally, as indicated by representatives of trade groups, including the National Risk Retention Association, RRGs have proved especially advantageous for small and midsized businesses.
To obtain more specific information about how RRGs have benefited their membership, we interviewed representatives of and reviewed documents supplied by six RRGs that have been in business for more than 5 years, as well as two more recently established RRGs. Overall, these eight RRGs had anywhere from 2 to more than 14,500 members. They provided coverage to a variety of insureds, including educational institutions, hospitals, attorneys, and building contractors. The following three examples illustrate some of the services and activities RRGs provide or undertake.

An RRG that insures about 1,100 schools, universities, and related organizations throughout the United States offers options tailored to its members, such as educators' legal liability coverage and coverage for students enrolled in courses offering off-campus internships. According to an RRG representative, the RRG maintains a claims database to help it accurately and competitively price its policies. Members also benefit from risk-management services, such as training and courses on sexual harassment and tenure litigation, and work with specialists to develop loss-control programs.

An RRG that reported that it insures 730 of the nation's approximately 3,000 public housing authorities provides coverage for risks such as pesticide exposure, law enforcement liability, and lead-based paint liability. The RRG indicated that while premium rates have fluctuated, they are similar to prices from about 15 years ago. The RRG also offers risk-management programs, such as those for reducing fires, and reported that, as a result of conducting member inspections, it recently compiled more than 2,000 recommendations on how to reduce covered risks.

An RRG that primarily provides insurance to about 45 hospitals in California and Nevada offers general and professional coverage such as personal and bodily injury and employee benefit liability. The RRG also offers a variety of risk-management services specifically aimed at reducing losses and controlling risks in hospitals. According to an RRG official, adequately managing risk within the RRG has allowed for more accurate pricing of the liability coverage available to members.

Generally, RRGs have remained relatively small compared with traditional insurers. Based on our analysis of 2003 financial data submitted to NAIC, 47 of the 79 RRGs (almost 60 percent) that had been in business at least 1 year wrote less than $10 million in gross premiums, whereas only 644 of 2,392 traditional insurers (27 percent) wrote less than $10 million. In contrast, 1,118 traditional insurers (almost 47 percent) wrote more than $50 million in gross premiums for 2003 compared with six RRGs (8 percent). Further, these six RRGs (all of which had been in business for at least 1 year) accounted for 52 percent of all gross premiums that RRGs wrote in 2003. This information suggests that just a few RRGs account for a disproportionate amount of the RRG market. Additionally, RRGs that wrote the most business tended to have been in business the longest. For example, as measured by gross premiums written, of the 16 RRGs that sold more than $25 million annually, 14 had been in business 5 years or more (see fig. 1). Yet the length of time an RRG has been in operation is not always the best predictor of an RRG's size. For example, of the 51 RRGs that had been in business for 5 or more years, 27 still wrote $10 million or less in gross premiums.
According to the Risk Retention Reporter (RRR), a trade journal that has covered RRGs since 1986, RRGs insure a wide variety of organizations and businesses. According to estimates published in RRR, in 2004, 105 RRGs (more than half of the 182 in operation at that time) served the healthcare sector (for example, hospitals, nursing homes, and doctors). In 1991, RRGs serving physicians and hospitals accounted for about 90 percent of healthcare RRGs. However, by 2004, largely because of a recent increase in nursing homes forming RRGs, this percentage decreased to about 74 percent. In addition, in 2004, 21 RRGs served the property development area (for example, contractors and homebuilders), and 20 served the manufacturing and commerce area (for example, manufacturers and distributors). Other leading business areas that RRGs served included professional services (for example, attorneys and architects), and government and institutions (for example, educational and religious institutions). Figure 2 shows how the distribution of RRGs by business area has changed since 1991. Additionally, according to RRR's estimates, almost half of all RRG premiums collected in 2004 were in the healthcare area (see fig. 3). The professional services and government and institutions business areas accounted for the second and third largest percentage of estimated gross premiums collected, respectively.

In looking at other characteristics of RRGs, an NAIC analysis found that the average annual failure rate for RRGs was somewhat higher than the average annual failure rate for all other property and casualty insurers. Between 1987 and 2003, the average annual failure rate for RRGs was 1.83 percent compared with the 0.78 percent failure rate for property and casualty insurers. Over this period, NAIC determined that a total of 22 RRGs failed, with between zero and five RRGs failing each year. In comparison, NAIC determined that a total of 385 traditional insurers failed, with between 5 and 57 insurance companies failing each year. Although the difference in failure rates was statistically significant, it should be noted that the comparison may not be entirely parallel. NAIC compared RRGs, which can sell only commercial liability insurance to businesses, with insurers that can sell all lines of property and casualty (including liability) insurance for commercial and personal purposes. Moreover, because NAIC included all property-casualty insurers, no analysis was done to adjust for size and longevity.

In creating RRGs, companies and organizations are generally responding to market conditions. As the availability and affordability of insurance decreased (creating a "hard" market), some insurance buyers sought alternatives to traditional insurance and turned to RRGs. In response, more RRGs formed from 2002 through 2004 than in the previous 15 years (1986–2001). This increase is somewhat similar in magnitude to an increase that occurred in 1986–1989 in response to an earlier hard market for insurance (see fig. 4). The 117 RRGs formed from January 1, 2002, through December 31, 2004, represent more than half of all RRGs in operation as of December 31, 2004. More specifically, RRGs established to provide medical malpractice insurance accounted for most of the increase in RRG numbers in 2002–2004.
Healthcare providers sought insurance after some of the largest medical malpractice insurance providers exited the market because of declining profits, partly caused by market instability and high and unpredictable losses—factors that have contributed to the high risks of providing medical malpractice insurance. From 2002 through 2004, healthcare RRGs accounted for nearly three-fourths of all RRG formations. Further, 105 RRGs were insuring healthcare providers as of the end of 2004, compared with 23 in previous years (see again fig. 2). These RRGs serve a variety of healthcare providers. For example, during 2003, 23 RRGs formed to insure hospitals and their affiliates, 13 formed to insure physician groups, and 11 formed to insure long-term care facilities, including nursing homes and assisted living facilities. However, the dramatic increase in the overall number of RRGs providing medical malpractice insurance may precipitate an increase in the number of RRGs vulnerable to failure. Studies have characterized the medical malpractice insurance industry as volatile because the risks of providing medical malpractice insurance are high. Finally, many of the recently formed healthcare-related RRGs are selling insurance in states where medical malpractice insurance rates for physicians have increased the most. For example, since April 30, 2002, the Pennsylvania Insurance Department has registered 32 RRGs to write medical malpractice products. In addition, since the beginning of 2003, the Texas Department of Insurance has registered 15 RRGs to write medical malpractice insurance, more than the state had registered in the previous 16 years. Other states where recently formed RRGs were insuring doctors include Illinois and Florida, states that have also experienced large increases in medical malpractice insurance premium rates.

LRRA's regulatory preemption has allowed states to set regulatory requirements for RRGs that differ significantly from those for traditional insurers, and from each other, which has limited regulators' confidence in the regulation of RRGs. Many of the differences arise because some states allow RRGs to be chartered as captive insurance companies, which typically operate under a set of less restrictive rules than traditional insurers. As a result, RRGs generally domicile in those states that permit their formation as captive insurance companies, rather than in the states in which they conduct most of their business. For example, RRGs domiciled as captive insurers usually can start their operations with smaller amounts of capital and surplus than traditional insurance companies, use letters of credit to meet minimum capitalization requirements, or meet fewer reporting requirements. Regulatory requirements for captive RRGs vary among states as well, in part because the regulation of RRGs and captives is not subject to uniform, baseline standards, such as the NAIC accreditation standards that define a state's regulatory structure for traditional companies. As one notable example, states do not require RRGs to follow the same accounting principles when preparing their financial reports, making it difficult for some nondomiciliary state regulators, as well as NAIC analysts, to reliably assess the financial condition of RRGs. Regulators responding to our survey also expressed concern about the lack of uniform, baseline standards. Few (eight) indicated that they believed LRRA's regulatory safeguards and protections, such as the right to file a suit against an RRG in court, were adequate.
Further, some regulators suggested that some domiciliary states were modifying their regulatory requirements and practices to make it easier for RRGs to domicile in their state. We found some evidence to support these concerns based on differences among states in minimum capitalization requirements, willingness to charter RRGs to insure parties that sell extended service contracts to consumers, and willingness to charter RRGs primarily started by service providers, such as management companies, rather than insureds.

Regulatory requirements for captive insurers are generally less restrictive than those for traditional insurers and offer RRGs several financial advantages. For example, captive laws generally permit RRGs to form with smaller amounts of required capitalization (capital and surplus), the minimum amount of initial funds an insurer legally must have to be chartered. While regulators reported that their states generally require traditional insurance companies to have several million dollars in capital and surplus, they often reported that RRGs chartered as captives require no more than $500,000. In addition, unlike requirements for traditional insurance companies, the captive laws of the six leading domiciliary states permit RRGs to meet and maintain their minimum capital and surplus requirements in the form of an irrevocable letter of credit (LOC) rather than cash. According to several regulators that charter RRGs as captives, LOCs may provide greater protection to the insureds than cash, provided that only the insurance commissioner can access these funds. The insurance commissioner, who would be identified as the beneficiary of the LOC, could present the LOC to the bank and immediately access the cash, but a representative of the RRG could not. However, other state regulators questioned the value of LOCs because they believed cash would be more secure if an RRG were to experience major financial difficulties. One regulator noted that it becomes the regulator's responsibility, on a regular basis, to determine if the RRG is complying with the terms of the LOC.

In addition, in response to our survey, most regulators from states that would charter RRGs as captives reported that RRGs would not be required to comply with NAIC's risk-based capital (RBC) requirements. NAIC applies RBC standards to measure the adequacy of an insurer's capital relative to its risks. Further, RRGs chartered as captives may not be required to comply with the same NAIC financial reporting requirements, such as filing quarterly and annual reports with NAIC, that regulators expect traditional insurance companies to meet. For example, while the statutes of all the leading domiciliary states require RRGs chartered as captives to file financial reports annually with their insurance departments, as of July 2004, when we conducted our survey, the statutes of only half the leading domiciliary states—Hawaii, South Carolina, and Vermont—explicitly required that these reports also be provided to NAIC on an annual basis. In addition, when RRGs are chartered as captive insurance companies, they may not have to comply with the chartering state's statutes regulating insurance holding company systems. All 50 states and the District of Columbia have substantially adopted such statutes, based on NAIC's Model Insurance Holding Company System Regulatory Act.
As in the model act, a state's insurance holding company statute generally requires insurance companies that are part of holding company systems and doing business in the state to register with the state and annually disclose to the state insurance regulator all the members of that system. Additionally, the act requires that transactions among members of a holding company system be on fair and reasonable terms, and that insurance commissioners be notified of and given the opportunity to review certain proposed transactions, including reinsurance agreements, management agreements, and service contracts. For 2004, NAIC reviewed RRG annual reports and identified 19 RRGs that reported themselves as being affiliated with other companies (for example, their management and reinsurance companies). However, since only two of the six leading domiciliary states (Hawaii and, to some extent, South Carolina) actually require RRGs to comply with this act, we do not know whether more RRGs could be affiliated with other companies. The Hawaii regulator said that RRGs should abide by the act's disclosure requirements so that regulators can identify potential conflicts of interest with service providers, such as managers or insurance brokers. Unless an RRG is required to make these disclosures, the regulator would have the added burden of identifying and evaluating the nature of an RRG's affiliations. He added that such disclosures are important because the individual insureds of an RRG, in contrast to the single owner of a pure captive, may not have the ability to control potential conflicts of interest between the insurer and its affiliates. (See the next section of this report for examples of how affiliates of an RRG can have conflicts of interest with the RRG.)

Because of these regulatory advantages, RRGs are more likely to domicile in states that will charter them as captives than in the states where they sell insurance. Figure 5 shows that 18 states could charter RRGs as captives. The figure also shows that most RRGs have chosen to domicile in six states—Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont—all of which charter RRGs as captives and market themselves as captive domiciles. Of these states, Vermont and Hawaii have been chartering RRGs as captives for many years, but Arizona, the District of Columbia, Nevada, South Carolina, and five additional states have adopted their captive laws since 1999. In contrast to an RRG chartered as a captive, a true captive insurer generally does not directly conduct insurance transactions outside of its domiciliary state. However, states of domicile are rarely the states in which RRGs sell much, or any, insurance. According to NAIC, 73 of the 115 RRGs active in 2003 did not write any business in their state of domicile, and only 10 wrote more than 30 percent of their business in their state of domicile. The states in which RRGs wrote most of their business in 2003—Pennsylvania ($238 million), New York ($206 million), California ($156 million), Massachusetts ($98 million)—did not charter any RRGs. Texas, which chartered only one RRG, had $87 million in direct premiums written by RRGs. For more information on the number of RRGs chartered by state and the amount of direct premiums written by RRGs, see figure 6.

The current regulatory environment for RRGs, characterized by the lack of uniform, baseline standards, offers parallels to the earlier solvency regulation of multistate traditional insurers.
Uniformity in solvency regulation for multistate insurers is important, provided the regulation embodies best practices and procedures, because it strengthens the regulatory system across all states and builds trust among regulators. After many insurance companies became insolvent during the 1980s, NAIC and the states recognized the need for uniform, baseline standards, particularly for multistate insurers. To alleviate this situation, NAIC developed its Financial Regulation Standards and Accreditation Program (accreditation standards) in 1989 and began the voluntary accreditation of most state regulators in the 1990s. Prior to accreditation, states did not uniformly regulate the financial solvency of traditional insurers, and many states lacked confidence in the regulatory standards of other states. By becoming accredited, state regulators demonstrated that they were willing to abide by a common set of solvency standards and practices for the oversight of the multistate insurers chartered by their state. As a result, states now generally defer to an insurance company's domiciliary state regulator, even though each state retains the authority, through its licensing process, to regulate all traditional insurance companies selling in the state.

NAIC's accreditation standards define baseline requirements that states must meet for the regulation of traditional companies in three major areas: First, they include minimum standards for the set of laws and regulations necessary for effective solvency regulation. Second, they set minimum standards for practices and procedures, such as examinations and financial analysis, that regulators should routinely perform. Third, they establish expectations for resource levels and personnel practices, including the amount of education and experience required of professional staff, within an insurance department.

However, NAIC does not have a similar set of regulatory standards for regulation of RRGs, which also are multistate insurers. According to NAIC officials, when the accreditation standards originally were developed, relatively few states were domiciling RRGs as captive insurers, and the question of standards for the regulation of captives and RRGs did not arise until NAIC began its accreditation review of Vermont in 1993. NAIC completely exempted the regulation of captive insurers from the review process but included RRGs because, unlike pure captives, RRGs have many policyholders and write business in multiple states. NAIC's accreditation review of Vermont lasted about 2 years, and NAIC and Vermont negotiated an agreement that only part of the accreditation standards applied to RRGs. As a result of the review, NAIC determined that RRGs were sufficiently different from traditional insurers that the regulatory standards defining the laws and regulations necessary for effective solvency regulation should not apply to RRGs. However, NAIC and Vermont did not develop substitute standards to replace those they deemed inappropriate. Subsequently, other states domiciling RRGs as captives also have been exempt from enforcing the uniform set of laws and regulations deemed necessary for effective solvency regulation under NAIC's accreditation standards.
As a result, some states chartering RRGs as captives do not obligate them, for example, to adopt a common set of financial reporting procedures and practices, abide by NAIC's requirements for risk-based capital, or comply with requirements outlined in that state's version of NAIC's Model Insurance Holding Company System Regulatory Act. In contrast, while NAIC's standards for the qualifications of an insurance department's personnel apply to RRGs, they do not distinguish between the expertise needed to oversee RRGs and that needed to oversee traditional insurance companies. Because half of the 18 states that are willing to charter RRGs as captives have adopted captive laws since 1999, few domiciliary state insurance departments have much experience regulating RRGs as captive insurance companies. Further, in response to our 2004 survey, only three states new to chartering captives—Arizona, the District of Columbia, and South Carolina—reported that they have dedicated certain staff to the oversight of captives. However, the State of Nevada later reported to us that it dedicated staff to the oversight of captives as of June 2005.

The importance of standards that address regulator education and experience can be illustrated by decisions made by state insurance departments or staff relatively new to chartering RRGs. In 1988, Vermont chartered Beverage Retailers Insurance Co. Risk Retention Group (BRICO). Launched and capitalized by an outside entity, BRICO did not have a sufficient number of members at start-up, as evidenced by its need for an outside entity to provide the capital. It failed in 1995 in large part because it wrote far less business than originally projected and suffered from poor underwriting. Further, according to regulators, BRICO began to write business just as the market for its product softened, and traditional licensed insurers began to compete for the business. As a result, the Vermont regulators said that Vermont would not charter RRGs unless they had a sufficient number of insureds at start-up to capitalize the RRG and make its future operations sustainable. More recently, in 2000, shortly after it adopted its captive statutes, South Carolina chartered Commercial Truckers Risk Retention Group Captive Insurance Company. This RRG, which also largely lacked members at inception, failed within a year because it had an inexperienced management team, poor underwriting, and difficulties with its reinsurance company. The regulators later classified their experience with chartering this RRG, particularly the fact that the RRG lacked a management company, as "lessons learned" for their department. Finally, as reported in 2004, the Arizona insurance department inadvertently chartered an RRG that permitted only the brokerage firm that formed and financed the RRG to have any ability to control the RRG through voting rights. The Arizona insurance department explained that it approved the RRG's charter when the department was operating under an acting administrator and that it would make every effort to prevent similar mistakes.

According to NAIC officials, RRGs writing insurance in multiple states, like traditional insurers, would benefit from the adoption of uniform, baseline standards for state regulation, and NAIC plans to develop such standards gradually. NAIC representatives noted that questions about the application of accreditation standards related to RRGs undoubtedly would be raised again because several states new to domiciling RRGs will be subject to accreditation reviews in the next few years.
However, the representatives also noted that, because the NAIC accreditation team can review the oversight of only a few of the many insurance companies chartered by a state, the team might not select an RRG.

As discussed previously, states domiciling RRGs as captives are not obligated to require that RRGs meet a common set of financial reporting procedures and practices. Moreover, even among states that charter RRGs as captives, the financial reporting requirements for RRGs vary. Yet the only requirement under LRRA for the provision of financial information to nondomiciliary regulators is that RRGs provide annual financial statements to each state in which they operate. Further, since most RRGs sell the majority of their insurance outside their state of domicile, insurance commissioners from nondomiciliary states may have only an RRG's financial reports to rely on in determining whether an examination may be necessary. As we have reported in the past, to be of use to regulators, financial reports should be prepared under consistent accounting and reporting rules, provided in a timely manner, and result in a fair presentation of the insurer's true financial condition.

One important variation in reporting requirements is the use by RRGs of accounting principles that differ from those used by traditional insurance companies. The statutes of the District of Columbia, Nevada, South Carolina, and Vermont require their RRGs to use GAAP; Hawaii requires RRGs to use statutory accounting principles (SAP); and Arizona permits RRGs to use either. The differences in the two sets of accounting principles reflect the different purposes for which each was developed, and each produces a different—and not necessarily comparable—financial picture of a business. In general, SAP is designed to meet the needs of insurance regulators, the primary users of insurance financial statements, and stresses the measurement of an insurer's ability to pay claims (remain solvent) in order to protect insureds. In contrast, GAAP provides guidance that businesses follow in preparing their general purpose financial statements, which provide users such as investors and creditors with useful information that allows them to assess a business' ongoing financial performance. However, inconsistent use of accounting methodologies by RRGs could affect the ability of nondomiciliary regulators to determine the financial condition of RRGs, especially since regulators are accustomed to assessing traditional insurers that must file reports using SAP.

In addition, the statutes of each of the six domiciliary states allow RRGs, like other captive insurers, to modify whichever accounting principles they use by permitting the use of letters of credit (LOCs) to meet statutory minimum capitalization requirements. Strictly speaking, neither GAAP nor SAP would permit a company to count an undrawn LOC as an asset because it is only a promise of future payment—the money is neither readily available to meet policyholder obligations nor is it directly in the possession of the company. In addition to allowing LOCs, according to a review of financial statements by NAIC, the leading domiciliary states that require RRGs to file financial statements using GAAP also allow RRGs to modify GAAP by permitting them to recognize surplus notes under capital and surplus. This practice is not ordinarily permitted by GAAP. A company filing under GAAP would recognize a corresponding liability for the surplus note and would not simply add it to the company's capital and surplus.
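To make the effect of these modifications concrete, the sketch below works through a purely hypothetical example; the RRG, its dollar amounts, and the $500,000 minimum are illustrative assumptions rather than figures from any actual filing or statute. It shows how counting an undrawn LOC or a surplus note as capital can make a thinly capitalized company appear to meet a minimum capitalization requirement.

    # Hypothetical illustration of how permitted modifications to GAAP/SAP can change
    # reported capital and surplus. All figures are invented for illustration only.

    MIN_CAPITAL_AND_SURPLUS = 500_000   # assumed statutory minimum for this example

    def reported_capital(paid_in_capital, undrawn_loc, surplus_note, count_loc, count_note_as_surplus):
        """Return capital and surplus as it would appear under the chosen treatment."""
        capital = paid_in_capital
        if count_loc:
            capital += undrawn_loc        # modified treatment: undrawn LOC counted toward capital
        if count_note_as_surplus:
            capital += surplus_note       # modified treatment: surplus note added with no offsetting liability
        return capital

    # A hypothetical RRG with $200,000 paid in, a $300,000 undrawn LOC, and a $100,000 surplus note.
    unmodified = reported_capital(200_000, 300_000, 100_000, count_loc=False, count_note_as_surplus=False)
    modified   = reported_capital(200_000, 300_000, 100_000, count_loc=True,  count_note_as_surplus=True)

    print(f"Unmodified treatment: ${unmodified:,} (meets minimum? {unmodified >= MIN_CAPITAL_AND_SURPLUS})")
    print(f"Modified treatment:   ${modified:,} (meets minimum? {modified >= MIN_CAPITAL_AND_SURPLUS})")

Under these assumed figures, the same underlying resources produce very different pictures of solvency depending on the treatment allowed, which is the comparability problem described next.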
See appendix III for more specific information on the differences between SAP and GAAP, including permitted modifications, and how these differences could affect assessments of a company’s actual or risk-based capital. Variations in the use of accounting methods have consequences for nondomiciliary regulators who analyze financial reports submitted by RRGs and illustrate some of the regulatory challenges created by the absence of uniform standards. Most nondomiciliary states responding to our survey of all state regulators indicated that they performed only a limited review of RRG financial statements. To obtain more specific information about the impact of these differences, we contacted the six states—Pennsylvania, California, New York, Massachusetts, Texas, and Illinois—where RRGs collectively wrote almost half of their business in 2003 (see fig. 6). Regulators in Massachusetts and Pennsylvania reported that they did not analyze the financial reports and thus had no opinion about the impact of the accounting differences, but three of the other four states indicated that the differences resulted in additional work. Regulators from California and Texas told us that the use of GAAP, especially when modified, caused difficulties because insurance regulators were more familiar with SAP, which they also believed better addressed solvency concerns than GAAP. The regulator from Illinois noted that RRG annual statements were not marked as being filed based on GAAP and, when staff conducted their financial analyses, they took the time to disregard assets that would not qualify as such under SAP. The Texas regulator reported that, while concerned about the impact of the differences, his department did not have the staffing capability to convert the numbers for each RRG to SAP and, as a result, had to prioritize their efforts. Further, NAIC staff reported that the use by RRGs of a modified version of GAAP or SAP distorted the analyses they provided to state regulators. One of NAIC’s roles is to help states identify potentially troubled insurers operating in their state by analyzing insurer financial reports with computerized tools to identify statistical outliers or other unusual data. In the past, we have noted that NAIC’s solvency analysis is an important supplement to the overall solvency monitoring performed by states and can help states focus their examination resources on potentially troubled companies. NAIC uses Financial Analysis Solvency Tools (FAST), such as the ratios produced by the Insurance Regulatory Information System (IRIS) and the Insurer Profile Reports, to achieve these objectives and makes the results available to all regulators through a central database. However, NAIC analysts reported that differing accounting formats undermined the relative usefulness of these tools because the tools were only designed to analyze data extracted from financial reports based on SAP. Similarly, when we attempted to analyze some aspects of the financial condition of RRGs to compare them with traditional companies, we found that information produced under differing accounting principles diminished the usefulness of the comparison (see app. III). 
The lack of uniform, baseline regulatory standards for the oversight of RRGs contributed to the concerns of many state regulators, who did not believe the regulatory safeguards and protections built into LRRA (such as requiring RRGs to file annual financial statements with regulators and allowing regulators to file suit if they believe the RRG is financially unsound) were adequate. Only 8 of 42 regulators who responded to our survey question about LRRA's regulatory protections indicated that they thought the protections were adequate (see fig. 7). Eleven of the 28 regulators who believed that the protections were inadequate or very inadequate focused on the lack of uniform regulatory standards or the need for RRGs to meet certain minimum standards—particularly for minimum capital and surplus levels. In addition, 9 of the 28 regulators, especially those from California and New York, commented that they believed state regulators needed additional regulatory authority to supervise the RRGs in their states. While RRGs, like traditional insurers, can sell in any or all states, only the domiciliary regulator has any significant regulatory oversight authority.

In addition, the regulators from the six leading domiciliary states—Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont—did not agree on the adequacy of LRRA safeguards. For example, while the regulators from the District of Columbia, Hawaii, Nevada, and Vermont thought the protections adequate, the regulator from South Carolina reported that LRRA's safeguards were "neither adequate nor inadequate" because LRRA delegates the responsibility for establishing safeguards to domiciliary states, which can be either stringent or flexible in doing so. The other leading domiciliary state—Arizona—had not yet formed an opinion on the adequacy of LRRA's provisions. The regulator from Hawaii also noted that the effectiveness of the LRRA provisions was dependent upon the expertise and resources of the RRG's domiciliary regulator.

While many regulators did not believe LRRA's safeguards were adequate, few indicated that they had availed themselves of the tools LRRA does provide nondomiciliary state regulators. These tools include the ability to request that a domiciliary state undertake a financial examination and the right to petition a court of "competent jurisdiction" for an injunction against an RRG believed to be in a hazardous financial condition. Recent cases involving state regulation of RRGs typically have centered on challenges to nondomiciliary state statutes that affect operations of the RRGs, rather than actions by nondomiciliary states challenging the financial condition of RRGs selling insurance in their states. Finally, in response to another survey question, nearly half of the regulators said they had concerns that led them to contact domiciliary state regulators during the 24 months preceding our survey, but only five nondomiciliary states indicated that they had ever asked domiciliary states to conduct a financial examination. However, according to the survey, many state regulators availed themselves of the other regulatory safeguards that LRRA provides—the requirements that RRGs submit feasibility or operational plans to nondomiciliary states before beginning operations in those states and thereafter provide a copy of the same annual financial statements that they submit to their domiciliary state.
Almost all of the state regulators indicated that they reviewed these documents to some extent, although almost half indicated that they gave these reports less review than those submitted by other nonadmitted insurers. In addition, nine states indicated that RRGs began to conduct business in their states before supplying them with copies of their plans of operations or feasibility studies, but most indicated that these occurrences were occasional. Similarly, 15 states identified RRGs that failed to provide required financial statements for review, but most of these regulators indicated that the failure to file was an infrequent occurrence.

Some regulators, including those from New York, California, and Texas—states where RRGs collectively wrote about 26 percent of all their business but did not domicile—expressed concerns that domiciliary states were lowering their regulatory standards to attract RRGs to domicile in their states for economic development purposes. They sometimes referred to these practices as the "regulatory race to the bottom." RRGs, like other captives, can generate revenue for a domiciliary state's economy when the state taxes RRG insurance premiums or the RRG industry generates jobs in the local economy. The question of whether domiciliary states were competing with one another essentially was moot until about 1999, when more states began adopting captive laws. Until then, Vermont and Hawaii were two of only a few states that were actively chartering RRGs and through 1998 had chartered about 55 percent of all RRGs. However, between the beginning of 1999 and the end of 2004, they had chartered only 36 percent of all newly chartered RRGs.

The six leading domiciliary states actively market their competitive advantages on Web sites, at trade conferences, and through relationships established with trade groups. They advertise the advantages of their new or revised captive laws, and most describe the laws as "favorable" because, for example, they allow captives to use letters of credit to meet their minimum capitalization requirements. Most of these states also describe their corporate premium tax structure as competitive and may describe their staff as experienced with or committed to captive regulation. Vermont emphasizes that it is the third-largest captive insurance domicile in the world and the largest in the United States, with an insurance department that has more than 20 years of experience in regulating RRGs. South Carolina, which passed its captive legislation in 2000, emphasizes a favorable premium tax structure and the support of its governor and director of insurance for its establishment as a domicile for captives. Arizona describes itself as "business friendly," highlighting the lack of premium taxes on captive insurers and the state's "unsurpassed" natural beauty.

However, in addition to general marketing, some evidence exists to support the concern that the leading domiciliary states are modifying policies and procedures to attract RRGs. We identified the following notable differences among the states, some of which reflect each state's regulatory practices and approaches and others its statutes:

Willingness to domicile vehicle service contract (VSC) providers: Several states, including California, New York, and Washington, questioned whether RRGs consisting of VSC providers should even qualify as RRGs and expressed concern about states that allow these providers to form RRGs.
VSC providers issue extended service contracts for the costs of future repairs to consumers (that is, the general public) who purchase automobiles. Until 2001, almost all of these RRGs were domiciled in Hawaii, but since that date all new RRGs formed by VSC providers have domiciled in the District of Columbia and South Carolina. The Hawaii regulator said that the tougher regulations the state imposed in 2001 (requiring that RRGs insuring VSC providers annually provide acceptable proof that they were financially capable of meeting VSC claims filed by consumers) dissuaded these providers from domiciling any longer in Hawaii. In addition, one of the leading domiciliary states, Vermont, refuses to domicile any of these RRGs because of the potential risk to consumers. Consumers who purchase these contracts, not just the RRG insureds, can be left without coverage if the RRG that insures the VSC provider’s ability to cover VSC claims fails. (We discuss RRGs insuring service contract providers and consequences to insureds and consumers more fully later in this report.) Statutory minimum capitalization requirements: Differences in the minimum amount of capital and surplus (capitalization) each insurer must have before starting operations make it easier for smaller RRGs to domicile in certain states and reflect a state’s attitude towards attracting RRGs. For example, in 2003, Vermont increased its minimum capitalization amount from $500,000 to $1 million—according to regulators, to ensure that only RRGs that are serious prospects, with sufficient capital, apply to be chartered in the state. On the other hand, effective in 2005, the District of Columbia lowered its minimum capitalization amount for an RRG incorporated as a stock insurer (that is, owned by shareholders who hold its capital stock) from $500,000 to $400,000 to make it easier for RRGs to charter there. Corporate forms: In 2005, one of the six leading domiciliary states—the District of Columbia—enacted legislation that permits RRGs to form “segregated accounts.” The other leading domiciliary states permit the formation of segregated accounts or “protected cells” for other types of captives but not for their RRGs. According to the District’s statute, a captive insurer, including an RRG, may fund separate accounts for individual RRG members or groups of members with common risks, allowing members to segregate a portion of their risks from the risks of other members of the RRG. According to the District regulator, RRG members also would be required to contribute capital to a common account that could be used to cover a portion of each member’s risk. The District regulator also noted that the segregated cell concept has never been tested in insolvency; as a result, courts have not yet addressed whether the cells are legally separate. Willingness to charter entrepreneurial RRGs: RRGs may be formed with only a few members, with the driving force behind the formation being, for example, a service provider (such as the RRG’s management company) or a few of the members. These RRGs are referred to as “entrepreneurial” RRGs because their future success is often contingent on recruiting additional members as insureds. In 2004, South Carolina regulators reported they frequently chartered entrepreneurial RRGs to offset what they described as the “chicken and egg” problem—their belief that it can be difficult for RRGs to recruit new members without having the RRG already in place.
Regulators in several other leading domiciliary states have reported they would be willing to charter such RRGs if their operational plans appeared to be sound, but few reported having done so. However, regulators in Vermont said that they would not charter entrepreneurial RRGs because they often were created to make a profit for the “entrepreneur,” rather than to help members obtain affordable insurance. (We discuss entrepreneurial RRGs later in the report.) Finally, the redomiciling of three RRGs to two of the leading domiciliary states, while subject to unresolved regulatory actions in their original states of domicile, also lends some credibility to the regulators’ assertions of “regulatory arbitrage.” In 2004, two RRGs redomiciled to new states while subject to regulatory actions in their original states of domicile. One RRG, which had been operating for several years, redomiciled to a new state before satisfying the terms of a consent order issued by its original domiciliary state and without notifying its original state of domicile. Although the RRG satisfied the terms of the consent order about 3 months after it redomiciled, the regulator in the original domiciliary state reported that, as provided by LRRA, once redomiciled, the RRG had no obligation to satisfy the order. The second RRG, one that had been recently formed, was issued a cease and desist order by its domiciliary state because the regulators had questions about who actually owned and controlled the RRG. As in the first case, the original domiciliary state regulator told us that this RRG did not advise the state that it was going to redomicile and, once redomiciled, was under no legal obligation to satisfy the terms of the cease and desist order. The redomiciling, or rather liquidation, of the third RRG is more difficult to characterize because its original state of domicile (Hawaii) allowed it to transfer some of its assets to a new state of domicile (South Carolina) after issuing a cease and desist order to stop it from selling unauthorized insurance products directly to the general public, thereby violating the provisions of LRRA. More specifically, Hawaii allowed the RRG to transfer its losses and related assets for its “authorized” lines of insurance to South Carolina and required the Hawaiian company to maintain a $1 million irrevocable LOC issued in favor of the insurance commissioner until such time as the “unauthorized” insurance matter was properly resolved. South Carolina permitted the owners of these assets to form a new RRG offering a similar line of coverage and to use a name virtually identical to that of its predecessor in Hawaii. Had these RRGs been chartered as traditional insurance companies, they would not have had the ability to continue operating in their original state of domicile after redomiciling in another state without the original state’s express consent. Because traditional companies must be licensed in each state in which they operate, the original state of domicile would have retained its authority to enforce regulatory actions. Because LRRA does not comprehensively address how RRGs may be owned, controlled, or governed, RRGs may be operated in ways that do not consistently protect the best interests of their insureds. For example, while self-insurance is generally understood as risking one’s own money to cover losses, LRRA does not specify that RRG members, as owners, make capital contributions beyond their premiums or maintain any degree of control over their governing bodies (such as boards of directors).
As a result, in the absence of specific federal requirements and using the latitude LRRA grants them, some leading domiciliary regulators have not required all RRG insureds to make at least some capital contribution or exercise any control over the RRG. Additionally, some states have allowed management companies or a few individuals to form what are called “entrepreneurial” RRGs. Consequently, some regulators were concerned that RRGs were being chartered primarily for purposes other than self-insurance, such as making a profit for someone other than the collective insureds. Further, LRRA does not recognize that separate companies typically manage RRGs. Yet, past RRG failures suggest that sometimes management companies have promoted their own interests at the expense of the insureds. Although LRRA does not address governance issues such as conflicts of interest between management companies and insureds, Congress previously has enacted safeguards to address similar issues in the mutual fund industry. Finally, some of these RRG failures have resulted in thousands of insureds and their claimants losing coverage, some of whom may not have been fully aware that their RRG lacked state insurance insolvency guaranty fund coverage or the consequences of lacking such coverage. While RRGs are a form of self-insurance on a group basis, LRRA does not require that RRG insureds make a capital investment in their RRG and provides each state considerable authority to establish its own rules on how RRGs will be chartered and regulated. Most of the regulators from the leading domiciliary states reported that they require RRGs to be organized so that all insureds make some form of capital contribution but other regulators do not, or make exceptions to their general approach. Regulators from Vermont and Nevada emphasized that it was important for each member to have “skin in the game,” based on the assumption that members who make a contribution to the RRG’s capital and surplus would have a greater interest in the success of the RRG. The regulator from Nevada added that if regulators permitted members to participate without making a capital contribution, they were defeating the spirit of LRRA. However, another of the leading domiciliary states, the District of Columbia, does not require insureds to make capital contributions as a condition of charter approval and has permitted several RRGs to be formed accordingly. The District regulator commented that LRRA does not require such a contribution and that some prospective RRG members may not have the financial ability to make a capital contribution. Further, despite Vermont’s position that RRG members should make a capital contribution, the Vermont regulators said they occasionally waive this requirement under special circumstances; for example, if the RRG was already established and did not need any additional capital. In addition, several of the leading domiciliary states, including Arizona, the District of Columbia, and Nevada, would consider allowing a nonmember to provide an LOC to fund the capitalization of the RRG. However, as described by several regulators, including those in Hawaii and South Carolina, even when members do contribute capital to the RRG, the amount contributed can vary and be quite small. For instance, an investor with a greater amount of capital, such as a hospital, could initially capitalize an RRG, and expect smaller contributions from members (for example, doctors) with less capital. 
Or, in an RRG largely owned by one member, additional members might be required only to make a token investment, for example, $100 or less. An investment that small would be unlikely to motivate members to feel like or behave as “owners” who were “self-insuring” their risks. LRRA also does not require that RRG insureds retain control over the management and operation of their RRG. However, as discussed previously, the legislative history indicates that some of the act’s single-state regulatory framework and other key provisions were premised not only on ownership of an RRG being closely tied to the interests of the insureds but also on the insureds being highly motivated to ensure proper management of the RRG. Yet, in order to make or direct key decisions about a company’s operations, the insureds would have to be able to influence or participate in the company’s governing body (for example, a board of directors). A board of directors is the focal point of an insurer’s corporate governance framework and ultimately should be responsible for the performance and conduct of the insurer. Governance is the manner in which the boards of directors and senior management oversee a company, including how they are held accountable for their actions. Most leading state regulators said they expect members of RRGs to exert some control over the RRG by having the ability to vote for directors, even though these rights sometimes vary in proportion to the size of a member’s investment in the RRG or by share class. Most of the leading state regulators generally define “control” to be the power to direct the management and policies of an RRG as exercised by an RRG’s governing body, such as its board of directors. However, regulators from the District of Columbia asserted that they permit RRGs to issue nonvoting shares to their insureds because some members are capable of making a greater financial contribution than others and, in exchange for their investment, will seek greater control over the RRG. The regulators noted that allowing such arrangements increases the availability of insurance and has no adverse effect on the financial solvency of the RRG. Further, the District of Columbia permits nonmembers (that is, noninsureds) to appoint or vote for directors. In addition, we found that even regulators who expect all RRG members to have voting rights (that is, at a minimum a vote for directors) sometimes make exceptions. For example, an RRG domiciled in Vermont was permitted to issue shares that did not allow insureds to vote for members of the RRG’s governing body. The Vermont regulators reported that the attorney forming the RRG believed issuing the shares was consistent with the department’s position that RRG members should have “voting rights” because under Vermont law all shareholders are guaranteed other minimal voting rights. While most regulators affirmed that they expect RRG members to own and control their RRGs, how these expectations are fulfilled is less clear when an organization, such as an association, owns an RRG. Four states—Arizona, the District of Columbia, South Carolina, and Vermont—reported that they have chartered RRGs that are owned by a single organization or by multiple organizations, rather than by individual persons or businesses. One of these states—the District of Columbia—permits noninsureds to own the organizations that formed the RRG.
However, the District regulator said that while the noninsureds may own the voting or preferred stock of the association, they do not necessarily have an interest in controlling the affairs of the RRG. In addition, Arizona has permitted three risk purchasing groups (RPGs) to own one RRG. While the three RPGs, organized as domestic corporations in another state, collectively have almost 8,000 policyholders, four individuals, all of whom the Arizona regulator reported to be RRG insureds, are the sole owners of all three RPGs. The chartering of an “entrepreneurial” RRG—which regulators generally define as an RRG formed by an individual member or a service provider, such as a management company, for the primary purpose of making a profit for itself—has been controversial. According to several regulators, entrepreneurial RRGs are started with a few members and need additional members to remain viable. The leading domiciliary regulators have taken very different positions on entrepreneurial RRGs, based on whether they thought the advantages entrepreneurs could offer (obtaining funding and members) outweighed the potential adverse influence the entrepreneur could have on the RRG. We interviewed regulators from the six leading domiciliary states to obtain their views on entrepreneurial RRGs. In 2004, South Carolina regulators reported they firmly endorsed chartering entrepreneurial RRGs because they believed that already chartered RRGs stand a better chance of attracting members than those in the planning stages. They cited cases of entrepreneurial RRGs they believe have met the insurance needs of nursing homes and taxicab drivers. However, regulators from Vermont and Hawaii had strong reservations about this practice because they believe the goal of entrepreneurs is to make money for themselves—and that the pursuit of this goal could undermine the financial integrity of the RRG because of the adverse incentives that it creates. Vermont will not charter entrepreneurial RRGs and has discouraged them from obtaining a charter in Vermont by requiring RRGs (before obtaining their charter) to have a critical mass of members capable of financing their own RRG. In addition, the Vermont regulators said they would not permit an entrepreneur, if just a single owner, to form an RRG as a means of using LRRA’s regulatory preemption to bypass the licensing requirements of the other states in which it planned to operate. Two of the other leading domiciliary states—Arizona and Nevada—were willing to charter entrepreneurial RRGs, provided they believed that the business plans of the RRGs were sound. Finally, some of the leading state regulators that have experience with chartering entrepreneurial RRGs told us that they recognized that the interests of the RRG insureds have to be protected and that they took measures to do so. For example, the regulators from South Carolina said that even if one member largely formed and financed an RRG, they would try to ensure that the member would not dominate the operations. However, they admitted that the member could do so because of his or her significant investment in the RRG. Alternatively, the regulator from Hawaii reported that the state’s insurance division, while reluctant to charter entrepreneurial RRGs, would do so if the RRG agreed to submit to the division’s oversight conditions. For example, to make sure service providers are not misdirecting money, the division requires entrepreneurial RRGs to submit copies of all vendor contracts.
The Hawaii regulator also told us that the insurance division requires all captives to obtain the insurance commissioner’s approval prior to making any distributions of principal or interest to holders of surplus notes. However, he concluded that successful oversight ultimately depended on the vigilance of the regulator and the willingness of the RRG to share documentation and submit to close supervision. LRRA imposes no governance requirements that could help mitigate the risk to RRG insureds from potential abuses by other interests, such as their management companies, should those parties choose to maximize their own profits at the expense of the best interests of the RRG insureds. Governance rules enhance the independence and effectiveness of governing bodies, such as boards of directors, and improve their ability to protect the interests of the company and insureds they serve. Unlike a typical company where the firm’s employees operate and manage the firm, an RRG usually is operated by a management company and may have no employees of its own. However, while management companies and other service providers generally provide valuable services to RRGs, the potential for abuse arises if the interests of a management company are not aligned with the RRG insureds’ interest in consistently obtaining self-insurance at the most affordable price consistent with long-term solvency. These inherent conflicts of interest are exemplified in the circumstances surrounding 10 of 16 RRG failures that we examined. For example, members of the companies that provided management services to Charter Risk Retention Group Insurance Company (Charter) and Professional Mutual Insurance Company Risk Retention Group (PMIC) also served as officers of the RRGs’ boards of directors, which enabled them to make decisions that did not promote the welfare of the RRG insureds. In other instances, such as the failure of Nonprofits Mutual Risk Retention Group, Inc. (Nonprofits), the management company negotiated terms that made it difficult for the RRG to terminate its management contract and place its business elsewhere. Regulators knowledgeable about these and other failures commented that the members, while presumably self-insuring their risks, were probably more interested in satisfying their need for insurance than in actually running their own insurance company. The 2003 failure of three RRGs domiciled in Tennessee—American National Lawyers Insurance Reciprocal Risk Retention Group (ANLIR), Doctors Insurance Reciprocal Risk Retention Group (DIR), and The Reciprocal Alliance Risk Retention Group (TRA)—further illustrates the potential risks and conflicts of interest associated with a management company operating an RRG. In pending litigation, the State of Tennessee’s Commissioner of Commerce and Insurance, as receiver for the RRGs, has alleged that the three RRGs had common characteristics, such as (1) being formed by Reciprocal of America (ROA), a Virginia reciprocal insurer, which also served as the RRGs’ reinsurance company; (2) having a management company, The Reciprocal Group (TRG), which also served as the management company and attorney-in-fact for ROA; (3) receiving loans from ROA, TRG, and their affiliates; and (4) having officers and directors in common with ROA and TRG.
The receiver has alleged that through the terms of the RRGs’ governing instruments, such as their bylaws; management agreements with TRG (which prohibited the RRGs from replacing TRG as their exclusive management company for as long as the loans were outstanding); and the common network of interlocking directors among the companies, TRG effectively controlled the boards of directors of the RRGs in a manner inconsistent with the best interests of the RRGs and their insureds. As alleged in the complaint filed by the Tennessee regulator, one such board decision involved a reinsurance agreement, in which the RRGs ceded 90-100 percent of their risk to ROA with a commensurate amount of premiums—conditions that according to the regulator effectively prevented the RRGs from ever operating independently or retaining sufficient revenue to pay off their loans with ROA and TRG and thus remove TRG as their management company. Within days after the Commonwealth of Virginia appointed a receiver for the rehabilitation or liquidation of ROA and TRG, the State of Tennessee took similar actions for the three RRGs domiciled in Tennessee. The following failures of other RRGs also illustrate behavior suggesting that management companies and affiliated service providers have promoted their own interests at the expense of the RRG insureds: According to the Nebraska regulators, Charter failed in 1992 because its managers, driven by goals to maximize their profits, undercharged on insurance rates in an effort to sell more policies. One board officer and a company manager also held controlling interests in third-party service providers, including the one that determined if claims should be paid. Further, the board officer and a company manager, as well as the RRG, held controlling interests in the RRG’s reinsurance company. A Nebraska regulator noted that when a reinsurance company is affiliated with the insurer it is reinsuring: (1) the reinsurer’s incentive to encourage the insurer to adequately reserve and underwrite is reduced, and (2) the insurer also will be adversely affected by any unprofitable risk it passes to the reinsurer. PMIC, which was domiciled in Missouri and formed to provide medical malpractice insurance coverage for its member physicians, was declared insolvent in 1994. The RRG’s relationship with the companies that provided its management services undermined the RRG in several ways. The president of PMIC was also the sole owner of Corporate Insurance Consultants (CIC), a company with which PMIC had a marketing service and agency agreement. As described in the RRG’s examination reports, the RRG paid CIC exorbitant commissions for services that CIC failed to provide, but allowed CIC to finance collateral loans made by the reinsurance company to CIC. In turn, CIC had a significant ownership stake in the RRG’s reinsurance company, which also provided PMIC with all of its personnel. The reinsurer’s own hazardous financial condition resulted in the failure of PMIC. In the case of Nonprofits, Vermont regulators indicated that the excessive costs of its outsourced management company and its outsourced underwriting and claims operations essentially contributed to its 2000 failure. The regulators said that the management company was in a position to exert undue influence over the RRG’s operations because the principals of the management company loaned the RRG its start-up capital in the form of irrevocable LOCs.
In addition to charging excessive fees, the management company also locked the RRG into a management contract that only allowed the RRG to cancel the contract 1 year before its expiration. If the RRG did not, the contract would automatically renew for another 5 years, a requirement of which the RRG insureds said they were unaware. Although LRRA has no provisions that address governance controls, Congress has acted to provide such controls in similar circumstances in another industry. In response to conditions in the mutual fund industry, Congress passed the Investment Company Act of 1940 (1940 Act). The 1940 Act, as implemented by the Securities and Exchange Commission (SEC), establishes a system of checks and balances that includes participation of independent directors on mutual fund boards, which oversee transactions between the mutual fund and its investment adviser. A mutual fund’s structure and operation, like that of an RRG, differs from that of a traditional corporation. In a typical corporation, the firm’s employees operate and manage the firm; the corporation’s board of directors, elected by the corporation’s stockholders, oversees its operation. Unlike a typical corporation, but similar to many RRGs, a typical mutual fund has no employees and contracts with another party, the investment adviser, to administer the mutual fund’s operations. Recognizing that the “external management” of most mutual funds presents inherent conflicts between the interests of the fund shareholders and those of the fund’s investment adviser, as well as potential for abuses of fund shareholders, Congress included several safeguards in the 1940 Act. For example, with some exceptions, the act requires that at least 40 percent of the board of directors of a mutual fund be disinterested (that is, that directors be independent of the fund’s investment adviser as well as certain other persons having significant or professional relationships with the fund) to help ensure that the fund is managed in the best interest of its shareholders. The 1940 Act also regulates the terms of contracts with investment advisers by imposing a maximum contract term and by guaranteeing the board’s and the shareholders’ ability to terminate an investment adviser contract. The act also requires that the terms of any contract with the investment adviser and the renewal of such contract be approved by a majority of directors who are not parties to the contract or otherwise interested persons of the investment adviser. Further, the 1940 Act imposes a fiduciary duty upon the adviser in relation to its level of compensation and provides the fund and its shareholders with the right to sue the adviser should the fees be excessive. The management controls imposed on mutual fund boards do not supplant state law on duties of “care and loyalty” that oblige directors to act in the best interests of the mutual fund, but enhance a board’s ability to perform its responsibilities consistent with the protection of investors and the purposes of the 1940 Act. In addition to lacking comprehensive provisions for ownership, control, and governance of RRGs, LRRA does not mandate that RRGs disclose to their insureds that they lack state insurance insolvency guaranty fund protection. 
LRRA’s legislative history indicates that the prohibition on RRGs participating in state guaranty funds (operated to protect insureds when traditional insurers fail) stemmed, in part, from a belief that the lack of protection would help motivate RRG members to manage the RRG prudently. LRRA does provide nondomiciliary state regulators the authority to mandate the inclusion of a specific disclosure, which informs RRG insureds that they lack guaranty fund coverage, on insurance policies issued to residents of their state (see fig. 8). However, LRRA does not provide nondomiciliary states with the authority to require the inclusion of this disclaimer in policy applications or marketing materials. For example, of 40 RRGs whose Web sites we were able to identify, only 11 disclosed in their marketing material that RRGs lack guaranty fund protection. In addition, 11 of the RRGs omitted the words “Risk Retention Group” from their names. All of the six leading domiciliary states have adopted varying statutory requirements that RRGs domiciled in their states include the disclosure in their policies, regardless of where they operate. The statutes of Hawaii, South Carolina, and the District of Columbia require that the disclosure be printed on applications for insurance, as well as on the front and declaration page of each policy. Requiring that the disclosure be printed on insurance applications gives prospective RRG insureds a better chance of understanding that they lack guaranty fund protection. Regulators in South Carolina, based on their experience with the failure of Commercial Truckers RRG in 2001, also reported that they require insureds, such as those of transportation and trucking RRGs, to place their signature beneath the disclosure. The regulators imposed this additional requirement because they did not believe that some insureds would be as likely as other insureds (for example, hospital conglomerates) to understand the implications of not having guaranty fund coverage. In contrast, the statutes of Arizona and Vermont require only that the disclosure be printed on the insurance policies. The six leading domiciliary state regulators had mixed views on whether the contents of the disclosure should be enhanced, but none recommended that LRRA be changed to permit RRGs to have guaranty fund protection. It is unclear whether RRG insureds who obtain insurance through organizations that own RRGs understand that they will not have guaranty fund coverage. Four states—Arizona, the District of Columbia, South Carolina, and Vermont—indicated that they have chartered RRGs owned by single organizations. When an organization is the insured of the RRG, the organization receives the insurance policy with the disclosure about lack of guaranty fund protection. Whether the organization’s members, who are insured by the RRG, understand that they lack guaranty fund coverage is less clear. The Vermont regulators indicated that members typically are not advised that they lack guaranty fund coverage before they receive the policy. Thus, the regulators recommended that applications for insurance contain the disclosure as well. The Arizona regulator reported that the insurance applications signed by the insureds of the Arizona-domiciled RRG owned by three RPGs did not contain a disclosure on the lack of guaranty fund coverage, although the policy certificates did. Further, he reported that the practices of RPGs were beyond his department’s jurisdiction and that he does not review them.
A failure to understand that RRG insureds are not protected by guaranty funds can have serious consequences for RRG members and their claimants, some of whom have lost coverage as a result of RRG failures. For example, of the 21 RRGs that have been placed involuntarily in liquidation, 14 either have or had policyholders whose claims remain or are likely to remain partially unpaid (see app. IV). Member reaction to the failure of the three RRGs domiciled in Tennessee further illustrates that the wording and the placement of the disclosure may be inadequate. In 2003, an insurance regulator from Virginia, a state where many of the RRG insureds resided, reported that he received about 150–200 telephone calls from the insureds of these RRGs; the insureds did not realize they lacked “guaranty fund” coverage and instead asked why they did not have “back-up” insurance when their insurance company failed. He explained that the insureds were “shocked” to discover that they were members of an RRG rather than a traditional insurance company and that they had no guaranty fund coverage. According to the regulator, they commented, “Who reads their insurance policies?” Regulators in Tennessee also noted that insureds of the RRGs, including attorneys, hospitals, and physicians, did not appear to understand the implications of self-insuring their risks and the lack of guaranty fund coverage. In 2004, the State of Tennessee estimated that the potential financial losses from these failures to the 50,000 or so hospitals, doctors, and attorneys that were members of the Tennessee RRGs could exceed $200 million, once the amount of unpaid claims was fully known. Other regulators, including those in Missouri, in response to our survey, and New York, in an interview, also expressed concern that some RRG members might not fully understand the implications of a lack of guaranty fund protection and were not the “sophisticated” consumers that they believe LRRA may have presumed. In addition, in response to our survey, regulators from other states, including New Mexico and Florida, expressed specific concerns about third-party claimants whose claims could go unpaid when an RRG failed and the insured refused or was unable to pay claims. The Florida regulator noted that the promoters of the RRG could accentuate the “cost savings” aspect of the RRG at the expense of explaining the insured’s potential future liability in the form of unpaid claims due to the absence of guaranty funds should the RRG fail. In addition, regulators who thought that protections in LRRA were inadequate, such as those in Wyoming, Virginia, and Wisconsin, tended to view lack of guaranty fund protection as a primary reason for developing and implementing more uniform regulatory standards or providing nondomiciliary states greater regulatory authority over RRGs. Lack of guaranty fund protection also can have unique consequences for consumers who purchase extended service contracts from service contract providers. Service contract providers form RRGs to insure their ability to pay claims on extended service contracts—a form of insurance also known as contractual liability insurance—and sell these contracts to consumers. In exchange for the payment (sometimes substantial) made by the consumer, the service contract provider commits to performing services—for example, paying for repairs to an automobile.
Service contract providers may be required to set aside some portion of the money paid by consumers in a funded “reserve account” to pay resulting claims and may have to buy insurance (for example, from the RRG they have joined) to guarantee their ability to pay claims. However, potential problems result from consumers’ perception that what they have purchased is insurance (since the service contract provider pays for repairs or other service) when in fact it is not. Only the service contract provider purchases insurance; the consumer signs a contract for services. The failure of several RRGs, including HOW Insurance Company RRG (HOW) in 1994 and National Warranty RRG in 2003, underscores the consequences that failures of RRGs that insure service contract providers can have on consumers: In 1994, the Commonwealth of Virginia liquidated HOW and placed its assets in receivership. This RRG insured the ability of home builders to fulfill contractual obligations incurred by selling extended service contracts to home buyers. In a case that was settled out of court, the Commonwealth of Virginia asserted that the homeowners who purchased the contracts against defects in their homes had been misled into believing that they were entitled to first-party insurance benefits—that is, payment of claims. A Virginia regulator said that while his department received few calls from the actual insureds (that is, the home builders) at the time of failure, it received many calls from homeowners who had obtained extended service contracts when they purchased their home and thought they were insured directly by the RRG. In 2003, National Warranty Insurance RRG failed, leaving behind thousands of customers with largely worthless vehicle service contracts (VSCs). This RRG, domiciled in the Cayman Islands, insured the ability of service contract providers to honor contractual liabilities for automobile repairs. Before its failure, National Warranty insured at least 600,000 VSCs worth tens of millions of dollars. In 2003, the liquidators of National Warranty estimated that losses could range from $58 million to $74 million. National Warranty’s failure also raised the question of whether RRGs were insuring consumers directly, which LRRA prohibits—for example, because the laws of many states, including Texas, require that the insurance company become directly responsible for unpaid claims in the event a service contract provider fails to honor its contract. The failure of National Warranty also raised the question of whether RRGs should insure service contract providers at all because of the potential direct damage to consumers. Several regulators, including those in California, Wisconsin, and Washington, went even further. In response to our survey, they opined that LRRA should be amended to preclude RRGs from offering “contractual liability” insurance because such policies cover a vehicle service contract provider’s financial obligations to consumers. At a minimum, regulators from New York and California, in separate interviews, recommended that consumers who purchase extended service contracts insured by RRGs at least be notified in writing that the contracts they purchase are not insurance and would not qualify for state guaranty fund coverage. In establishing RRGs, Congress intended to alleviate a shortage of affordable commercial liability insurance by enabling commercial entities to create their own insurance companies to self-insure their risks on a group basis.
RRGs, as an industry, according to most state insurance regulators, have fulfilled this vision—and the intent of LRRA—by increasing the availability and affordability of insurance for members that experienced difficulty in obtaining coverage. While constituting only a small portion of the total liability insurance market, RRGs have had a consistent presence in this market over the years. However, the number of RRGs has increased dramatically in recent years in response to recent shortages of liability insurance. While we were unable to evaluate the merits of individual RRGs, both state regulators and advocates of the RRG industry provided specific examples of how they believe RRGs have addressed shortages of insurance in the marketplace. This ability is best illustrated by the high number of RRGs chartered over the past 3 years to provide medical malpractice insurance, a product which for traditional insurers historically has been subject to high or unpredictable losses with resulting failures. However, the regulation of RRGs by a single state, in combination with the recent increase in the number of states new to domiciling RRGs, the increase in the number of RRGs offering medical malpractice insurance, and a wide variance in regulatory practices, has increased the potential for future solvency risks. As a result, RRG members and their claimants could benefit from greater regulatory consistency. Insurance regulators have recognized the value of having a consistent set of regulatory laws, regulations, practices, and expertise through the successful implementation of NAIC’s accreditation program for state regulators of multistate insurance companies. Vermont and NAIC negotiated the relaxation of significant parts of the accreditation standards for RRGs because it was unclear how the standards, designed for traditional companies, applied to RRGs. However, this agreement allowed states chartering RRGs as captives considerable latitude in their regulatory practices, even though most RRGs were multistate insurers, raising the concerns of nondomiciliary states. With more RRGs than ever before and with a larger number of states competing to charter them, regulators, working through NAIC, could develop a set of comprehensive, uniform, baseline standards for RRGs that would provide a level of consistency that would strengthen RRGs and their ability to meet the intent of LRRA. While the regulatory structure applicable to RRGs need not be identical to that used for traditional insurance companies, uniform, baseline regulatory standards could create a more transparent and protective regulatory environment, enhancing the financial strength of RRGs and increasing the trust and confidence of nondomiciliary state regulators. These standards could include such elements as the use of a consistent accounting method, disclosing relationships with affiliated businesses as specified by NAIC’s Model Insurance Holding Company System Regulatory Act, and the qualifications and number of staff that insurance departments must have available to charter RRGs. These standards could reflect the regulatory best practices of the more experienced RRG regulators and address the concerns of the states where RRGs conduct the majority of their business. Further, such standards could reduce the likelihood that RRGs would practice regulatory arbitrage, seeking departments with the most relaxed standards. 
While it may not be essential for RRGs to follow all the same rules that traditional insurers follow, it is difficult to understand why all RRGs and their regulators, irrespective of where they are domiciled, should not conform to a core set of regulatory requirements. Developing and implementing such standards would strengthen the foundation of LRRA’s flexible framework for the formation of RRGs. LRRA’s provisions for the ownership, control, and governance of RRGs may not be sufficient to protect the best interests of the insureds. While LRRA’s flexibility has worked well to promote the formation of RRGs in the absence of uniform, baseline standards, this same flexibility has left some RRG insureds vulnerable to misgovernance. In particular, how RRGs are capitalized is central to concerns of experienced regulators about the chartering of entrepreneurial RRGs because a few insureds or service providers, such as management companies, that provide the initial capital also may retain control over the RRG to benefit their personal interests. Further, RRGs, like mutual fund companies, depend on management companies to manage their affairs, but RRGs lack the federal protections Congress and SEC have afforded mutual fund companies. As evidenced by the circumstances surrounding many RRG failures, the interests of management companies inherently may conflict with the fundamental interests of RRGs—that is, obtaining stable and affordable insurance. Moreover, these management companies may have the means to promote their own interests if they exercise effective control over an RRG’s board of directors. While RRGs may need to hire a management company to handle their day-to-day operations, principles drawn from legislation such as the Investment Company Act of 1940 would strongly suggest that an RRG’s board of directors should have a substantial number of independent directors to control policy decisions. In addition, these principles would strongly suggest that RRGs retain certain rights when negotiating the terms of a management contract. Yet, LRRA has no provisions that establish the insureds’ authority over management. Without these protections, RRG insureds and their third-party claimants are uniquely vulnerable to abuse because they are not afforded the oversight of a multistate regulatory environment or the benefits of guaranty fund coverage. Nevertheless, we do not believe that RRGs should be afforded the protection of guaranty funds. Providing such coverage could further reduce any incentives insureds might have to participate in the governance of their RRG and at the same time allow them access to funds supplied by insurance companies that do not benefit from the regulatory preemption. On the other hand, RRG insureds have a right to be adequately informed about the risks they could incur before they purchase an insurance policy. Further, consumers who purchase extended service contracts (which take on the appearance of insurance) from RRG insureds likewise have a right to be informed about these risks. The numerous comments that regulators received from consumers affected by RRG failures illustrate how profoundly uninformed the consumers were. Finally, while opportunities exist to enhance the safeguards in LRRA, we note again the affirmation provided by most regulators responding to our survey—that RRGs have increased the availability and affordability of insurance.
That these assertions often came from regulators who also had concerns about the adequacy of LRRA’s regulatory safeguards underscores the successful track record of RRGs as a self-insurance mechanism for niche groups. However, as the RRG industry has matured, and recently expanded, so have questions from regulators about the ability of RRGs to safely insure the risks of their members. These questions emerge, especially in light of recent failures, because RRGs can have thousands of members and operations in multiple states. Thus, in some cases, RRGs can take on the appearance of a traditional insurance company—however, without the back-up oversight provided traditional insurers by other state regulators or the protection of guaranty funds. This is especially problematic because RRGs chartered under captive regulations differ from other captives—RRGs benefit from the regulatory preemption that allows multistate operation with single-state regulation. Further, we find it difficult to believe that members of RRGs with thousands of members view themselves as “owners” prepared to undertake the due diligence presumed by Congress when establishing RRGs as a self-insurance mechanism. Because there is no federal regulator for this federally created entity, all regulators, in both domiciliary and nondomiciliary states, must look to whatever language LRRA provides when seeking additional guidance on protecting the residents of their state. Thus, the mandated development and implementation of uniform, baseline standards for the regulation of RRGs, and the establishment of governance protections, could make the success of RRGs more likely. In the absence of a federal regulator to ensure that members of RRGs, which are federally established but state-regulated insurance companies, and their claimants are afforded the benefits of a more consistent regulatory environment, we recommend that the states, acting through NAIC, develop and implement broad-based, uniform, baseline standards for the regulation of RRGs. These standards should include, but not be limited to, filing financial reports on a regular basis using a uniform accounting method, meeting NAIC’s risk-based capital standards, and complying with the Model Insurance Holding Company System Regulatory Act as adopted by the domiciliary state. The states should also consider standards for laws, regulatory processes and procedures, and personnel that are similar in scope to the accreditation standards for traditional insurers. To assist NAIC and the states in developing and implementing uniform, baseline standards for the regulation of RRGs, Congress may wish to consider the following two actions: Setting a date by which NAIC and the state insurance commissioners must develop an initial set of uniform, baseline standards for the regulation of RRGs. After that date, making LRRA’s regulatory preemption applicable only to those RRGs domiciled in states that have adopted NAIC’s baseline standards for the regulation of RRGs. To strengthen the single-state regulatory framework for RRGs and better protect RRG members and their claimants, while at the same time continuing to facilitate the formation and efficient operation of RRGs, Congress also may wish to consider strengthening LRRA in the following three ways: Requiring that insureds of the RRG qualify as owners of the RRG by making a financial contribution to the capital and surplus of the RRG, above and beyond their premium. 
Requiring that all of the insureds, and only the insureds, have the right to nominate and elect members of the RRG’s governing body. Establishing minimum governance requirements to better secure the operation of RRGs for the benefit of their insureds and safeguard assets for the ultimate purpose of paying claims. These requirements should be similar in objective to those provided by the Investment Company Act of 1940, as implemented by SEC; that is, to protect the interests of the insureds by managing conflicts of interest that are likely to arise when RRGs are managed by, or obtain services from, a management company or its affiliates. Amendments to LRRA could require that a majority of an RRG’s board of directors consist of “independent” directors (that is, not be associated with the management company or its affiliates) and require that certain decisions presenting the most serious potential conflicts, such as approving the management contract, be approved by a majority of the independent directors; provide safeguards for negotiating the terms of the management contract—for example, by requiring periodic renewal of management contracts by a majority of the RRG’s independent directors, or a majority of the RRG’s insureds, and guaranteeing the right of a majority of the independent directors or a majority of the insureds to unilaterally terminate management contracts upon reasonable notice; and impose a fiduciary duty upon the management company to act in the best interests of the insureds, especially with respect to compensation for its services. To better educate RRG members, including the insureds of organizations that are sole owners of an RRG, about the potential consequences of self-insuring their risks, and to extend the benefits of this information to consumers who purchase extended service contracts from RRG members, Congress may wish to consider the following two actions: Expand the wording of the current disclosure to more explicitly describe the consequences of not having state guaranty fund protection should an RRG fail, and require that RRGs print the disclosure prominently on policy applications, the policy itself, and marketing materials, including those posted on the Internet. These requirements also would apply to insureds who obtain their insurance through organizations that may own an RRG; and Develop a modified version of the disclosure for consumers who purchase extended service contracts from providers that form RRGs to insure their ability to meet these contractual obligations. The disclosure would be printed prominently on the extended service contract application, as well as on the contract itself. We requested comments on a draft of this report from the President of the National Association of Insurance Commissioners or her designee. The Executive Vice President and CEO of NAIC said that the report was “…well thought out and well documented,” and provided “…a clear picture of how states are undertaking their responsibilities with regard to regulation of risk retention groups.” She further stated that our report “…explored the issues that are pertinent to the protection of risk retention group members and the third-party claimants that are affected by the coverage provided by the risk retention groups.” NAIC expressed agreement with our conclusions and recommendations. NAIC also provided technical comments on the report that were incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other interested Members of Congress, congressional committees, and the Executive Vice President of NAIC and the 56 state and other governmental entities that are members of NAIC. We also will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-8678 or hillmanr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to (1) examine the effect risk retention groups (RRG) have had on the availability and affordability of commercial liability insurance; (2) assess whether any significant regulatory problems have resulted from the Liability Risk Retention Act’s (LRRA) partial preemption of state insurance laws; and (3) evaluate the sufficiency of LRRA’s ownership, control, and governance provisions in protecting the interests of RRG insureds. We conducted our review from November 2003 through July 2005 in accordance with generally accepted government auditing standards. Overall, we used surveys, interviews, and other methods to determine if RRGs have increased the availability and affordability of commercial liability insurance. First, we surveyed regulators in all 50 states and the District of Columbia. The survey asked regulators to respond to questions about regulatory requirements for RRGs domiciled in their state, their experiences with RRGs operating in their state, and their opinions about the impact of LRRA. We pretested this survey with five state regulators, made minor modifications, and conducted data collection during July 2004. We e-mailed the survey as a Microsoft Word attachment and received completed surveys from the District of Columbia and all the states except Maryland. Then, to obtain more specific information about how regulators viewed the usefulness of RRGs, we interviewed insurance regulators from 14 different states that we selected based on several characteristics that would capture the range of experiences regulators have had with RRGs. In addition, we interviewed representatives from eight RRGs serving different business areas and reviewed documentation they provided describing their operations and how they served their members. Second, we asked the National Association of Insurance Commissioners (NAIC) to calculate the overall market share of RRGs in the commercial liability insurance market as of the end of 2003. We used 2003 data for all financial analyses because it constituted the most complete data set available at the time of our analysis. Using its Financial Data Repository, a database containing annual and quarterly financial reports and data submitted by most U.S. domestic insurers, NAIC compared the total amount of gross premiums written by RRGs with the total amount of gross premiums generated by the sale of commercial liability insurance by all insurers. For the market share analysis, as well as for our analysis of gross premiums written by RRGs, we only included the 115 RRGs that wrote premiums during 2003. 
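For illustration only, the brief sketch below shows the form of the market-share arithmetic described above; the premium amounts and variable names are hypothetical placeholders of our own and are not drawn from NAIC’s Financial Data Repository.

    # Illustrative sketch of the market-share arithmetic; all figures are
    # hypothetical placeholders, not NAIC data.

    # Hypothetical 2003 gross premiums written by each active RRG, in dollars
    rrg_gross_premiums = [12_500_000, 4_200_000, 950_000]

    # Hypothetical industry-wide total of gross commercial liability premiums, in dollars
    total_commercial_liability_premiums = 150_000_000_000

    rrg_total = sum(rrg_gross_premiums)
    market_share = rrg_total / total_commercial_liability_premiums

    print(f"RRG gross premiums written: ${rrg_total:,.0f}")
    print(f"RRG share of the commercial liability market: {market_share:.2%}")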
NAIC officials reported that while they perform their own consistency checks on this data, state regulators were responsible for validating the accuracy and reliability of the data for insurance companies domiciled in their state. We conducted tests for missing data, outliers, and consistency of trends in reporting, and we found these data to be sufficiently reliable for the purposes of this report. Third, to determine the number of RRGs that states have chartered since 1981, we obtained data from NAIC that documented the incorporation and commencement of business dates for each RRG and identified the operating status of each RRG—for example, whether it was actively selling insurance or had voluntarily dissolved. Finally, to determine which business sectors RRGs were serving and the total amount of gross premiums written in each sector, we obtained information from a trade journal—the Risk Retention Reporter—because NAIC does not collect this information by business sector. We also requested that NAIC analyze its annual reporting data to calculate the “failure” rate for RRGs and compare it with that of traditional property and casualty insurance companies from 1987 through 2003. In response, NAIC calculated annual “failure” rates for each type of insurer, comparing the number of insurers that “failed” each year with the total number of active insurers that year. The analysis began with calendar year 1987 because it was the first full year following the passage of LRRA. NAIC classified an insurance company as having failed if a state regulator reported to NAIC that the state had placed the insurer in a receivership for the purpose of conserving, rehabilitating, or liquidating the insurance company. Since NAIC officials classified an insurance company subject to any one of these actions as having failed, the failure date for each insurance company reflects the date on which a state first took regulatory action. We independently verified the status of each RRG that NAIC classified as failed by cross-checking the current status of each RRG with information from two additional sources—state insurance departments’ responses to our survey, with follow-up interviews as necessary, and the Risk Retention Group Directory and Guide. To determine if the differences in annual failure rates of the RRGs and traditional companies were statistically significant, NAIC performed a paired T-test. NAIC concluded that the average annual RRG failure rates were higher than those for traditional property and casualty insurers. We also obtained a similar statistically significant result when testing for the difference across the 18-year period for RRGs and traditional insurers active in a given year. We recognize that, although these tests indicated statistically significantly different failure rates, the comparison between these insurer groups is less than optimal because the comparison group included all property and casualty insurers, which do not constitute a true “peer group” for RRGs. First, RRGs are only permitted to write commercial liability insurance, but NAIC estimated that only 34 percent of insurers exclusively wrote liability insurance. Further, NAIC’s peer group included traditional insurers writing both commercial and personal insurance. Second, we noted that the paired T-test comparison is more sensitive to the failure of any single RRG than to the failure of any single traditional insurer because of the relatively small number of RRGs.
Finally, most RRGs are substantially smaller in size (that is, in terms of premiums written) than many insurance companies and may have different characteristics from larger insurance companies. Given the data available to NAIC, it would have been a difficult and time-consuming task to individually identify and separate those property and casualty insurers with similar profiles for comparison with RRGs. In choosing which regulators to interview, we first selected regulators from the six states that had domiciled the highest number of active RRGs as of June 30, 2004, including two with extensive regulatory experience and four new to chartering RRGs. The six leading domiciliary states were Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont. Second, we selected regulators from eight additional states, including four that had domiciled just a few RRGs and four that had domiciled no RRGs. For states that had domiciled just a few or no RRGs, we identified and selected those where RRGs, as of the end of 2003, were selling some of the highest amounts of insurance. Finally, we also considered geographic dispersion in selecting states across the United States. In total, we selected 14 regulators (see table 1 for additional information). To determine if any significant regulatory problems have resulted from LRRA’s partial preemption of state insurance laws, as part of our survey we asked regulators to evaluate the adequacy of LRRA’s protections, describe how they reviewed RRG financial reports, and report whether their state had ever asked a domiciliary state to conduct an examination. To obtain an in-depth understanding of how state regulators viewed the adequacy of LRRA’s regulatory protections and identify specific problems, if any, we interviewed regulators from each of our selected 14 states. We made visits to the insurance departments of five of the six leading domiciliary states—Arizona, the District of Columbia, Hawaii, South Carolina, and Vermont—and five additional states—Nebraska, New York, Tennessee, Texas, and Virginia. To assess the framework for regulating RRGs in the six leading domiciliary states (the five we visited plus Nevada), we also reviewed state statutes and obtained from regulators detailed descriptions of their departments’ practices for chartering and regulating RRGs. To determine how the RRG regulatory framework created in these states compared with that of traditional insurers, we identified key components of NAIC’s accreditation program for traditional insurance companies, based on documentation provided by NAIC and our past reports. Finally, our survey also included questions about RRGs consisting of businesses that issued vehicle service contracts (VSCs) to consumers because this type of arrangement is associated with two failed RRGs. In reviewing how RRGs file financial reports, we assessed how the use or modification of two sets of accounting standards, generally accepted accounting principles (GAAP) and statutory accounting principles (SAP), could affect the ability of NAIC and regulators to analyze the reports. For the year 2003, we also obtained from NAIC the names of RRGs that used GAAP to file their financial reports and those that used SAP. To obtain an understanding of differences between these accounting principles, we obtained documentation from NAIC that identified key differences and specific examples of how each could affect an RRG’s balance sheet.
We relied on NAIC for explanations of SAP because NAIC sets standards for the use of this accounting method as part of its accreditation program. The purpose of the accreditation program is to make monitoring and regulating the solvency of multistate insurance companies more effective by ensuring that states adhere to basic recommended practices for an effective state regulatory department. Specifically, NAIC developed the Accounting Practices and Procedures Manual, a comprehensive guide on SAP, for insurance departments, insurers, and auditors to use. To better understand GAAP and its requirements, we reviewed concept statements from the Financial Accounting Standards Board (FASB), which is the designated private-sector organization that establishes standards for financial accounting and reporting. We also consulted with our accounting experts to better understand how GAAP affected the presentation of financial results. To determine if LRRA’s ownership, control, and governance requirements adequately protect the interests of RRG insureds, we analyzed the statute to identify provisions relevant to these issues. In addition, we reviewed the insurance statutes of the six leading domiciliary states—Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont— related to the chartering of RRGs to determine if those states imposed any statutory requirements on RRGs with respect to ownership, control, or governance of RRGs. To identify additional expectations that state insurance departments might have set for the ownership, control, or governance of RRGs, we interviewed regulators from the six leading domiciliary states and reviewed the chartering documents, such as articles of incorporation and bylaws, of RRGs recently chartered in five of those states. One state insurance department, South Carolina, would not provide us access to these documents although we were able to obtain articles of incorporation from the Office of the South Carolina Secretary of State. In addition, we looked at past failures (and the public documentation that accompanies failures) to assess whether factors related to the ownership, control, and governance of RRGs played a role, or were alleged to have played a role, in the failures, particularly with respect to inherent conflicts of interest between the RRG and its management company or managers. To identify these factors, we first selected 16 of the 22 failures to review, choosing the more recent failures from a variety of states. As available for each failure, we reviewed relevant documentation, such as examination reports, liquidation petitions and orders, court filings (for example, judgments, if relevant), and interviewed knowledgeable state officials. Because some of the failures were more than 5 years old, the amount of information we could collect about a few of the failures was more limited than for others. In the case of National Warranty RRG, we reviewed publicly available information as supplied by the liquidator of National Warranty on its Web site; we also interviewed insurance regulators in Nebraska where National Warranty’s offices were located and reviewed court documents. We used these alternative methods of obtaining information because National Warranty RRG’s liquidators would not supply us any additional information. 
To determine how frequently RRGs include the lack of guaranty fund disclosure on their Web sites and whether they use the words “risk retention group” in their name, we searched the Internet to identify how many RRGs had Web sites as of August 2004, based on a listing of 160 RRGs NAIC identified as active as of the beginning of June 2004. When we identified Web sites, we noted whether the words “risk retention group” or the acronym “RRG” appeared in the RRG’s name and reviewed the entire site for the lack of guaranty fund disclosure. We updated the results of our initial search in May 2005, using the original group of RRGs.

The Liability Risk Retention Act of 1986 permits risk retention groups (RRGs) to offer most types of commercial liability insurance. RRGs are unique because, unlike most insurance companies, they are regulated only by the state that chartered them and are largely exempt from the oversight of the states in which they operate. At the request of Chairman Michael G. Oxley of the House Committee on Financial Services, we are conducting a review of RRGs to determine how they have met the insurance needs of businesses and whether the exemption of RRGs from most state regulations, other than those of the state in which they are domiciled, has resulted in any regulatory concerns. We believe that you can make an important contribution to this study, and we ask that you respond to this survey so we can provide the most complete information about RRGs to Congress. The survey should take about 90 minutes to complete, although additional time may be required if your state has chartered several RRGs. Please note that attached to the e-mail that transmitted this survey is a file that identifies all the RRGs operating in your state as of year-end 2003. You will need to review this list in order to answer questions 23 and 24. As indicated in the survey, we would like you to provide us information about insurance statutes and regulations that apply to RRGs chartered in your state. For the regulations only, we are requesting that you provide a hyperlink to the regulation, but you are also welcome to send us a copy of the regulation attached to the e-mail message that contains your completed survey instrument. Please complete the survey in MS-Word and return it via e-mail to GAOrrgSurvey@gao.gov no later than July 23, 2004. If you encounter any technical difficulties, please contact: William R. Chatlos, phone: (202) 512-7607, e-mail: chatlosw@gao.gov.

NOTE: The number of states responding to an item is generally printed to the left of the response. No responses are provided in this appendix when the answers are too diverse to summarize or present briefly.

Definitions of acronyms and terms used in this questionnaire
Risk Retention Group (RRG): An RRG is a group of members with similar risks that join to create an insurance company to self-insure their risks. The Liability Risk Retention Act permits RRGs to provide commercial liability insurance and largely exempts them from regulatory oversight other than that performed by their chartering state.
Domiciliary state: The state that charters an RRG and is responsible for performing regulatory oversight, including examinations. (“State” includes the District of Columbia, and for RRGs chartered before 1985, Bermuda and the Cayman Islands.)
Host state: Any state in which an RRG operates but is not chartered.
Vehicle service contract (VSC): A vehicle service contract, purchased by consumers when they buy cars, is for maintaining and repairing an automobile beyond its manufacturer’s warranty coverage.
1. Please use your mouse to navigate throughout the survey by clicking on the field you wish to answer or filling in the requested field. 2. To select a check box, simply click or double click on the center of the box. 3. To change or deselect a response, simply click on the check box and the ‘X’ should disappear. 4. Consult with others as needed to complete any section. 5. After each section, there is a place for you to make comments. 2. Please provide the name, title, phone number, and e-mail address of the person completing the survey so we might contact them if there are questions. Part II: Requirements for Domiciling RRGs in Your State In Part II (Questions 3-12), we are asking about the role of your state as a “domiciliary” state for RRGs, the state responsible for chartering and regulating an RRG. This information is important, even if your state has not actually chartered an RRG, because the laws and regulations for domiciling RRGs vary from state to state. In contrast, Part III contains questions pertaining to RRGs operating but not domiciled in your state. In addition, in some states RRGs can be chartered under more than one set of laws. When responding to the questions below, please respond for each law under which an RRG could be chartered, even if an RRG or you, the regulator, would prefer that an RRG be chartered under one law (e.g., a captive law) rather than another (e.g., traditional insurance company law). 3. In your state, under which of the following domiciliary laws and/or regulations would RRGs be chartered, if domiciled and chartered in your state? (Check all that apply.) Traditional insurance law and/or regulations Captive law and/or regulations, with or without specific provisions for RRGs Other (Specify): 4. If your state has a captive law and/or regulations, please provide the following information: Not applicable, we do not use a captive law and/or regulations (Skip to Question #5.) a. Year captive law enacted: b. Year regulations created: c. Citation for statute: d. Citation for regulations (Please include website link below.) e. Does this law/regulation permit RRGs to be chartered as a captive? 5. If your state uses its traditional insurance law to charter RRGs, please provide the following information: Not applicable, we do not use traditional laws and/or regulations to charter RRGs. (Skip to Question #6.) a. Citation for statute under which RRGs are chartered: b. Citation for regulations (Please include website link below.) website link: 6. In your state, what are the minimum initial capital and/or surplus requirements for an RRG domiciled in your state under any of the following laws/regulations that are applicable? (RRGs Cannot Be Chartered Under These Laws/Regulations in this State) a. Traditional Insurance Laws/Regs b. Captive Laws/Regs c. Other (Specify): 7. In your state, under which of the following laws and/or regulations would RRGs domiciled in your state be required to comply with the National Association of Insurance Commissioners’ (NAIC) risk-based capital requirements? (RRGs Cannot Be Chartered in this State) a. Traditional Insurance Laws/Regs b. Captive Laws/Regs c. Other (Specify): 8. In your state, under which of the following laws and/or regulations would RRGs domiciled in your state be required to submit the same financial information as traditional insurance companies to the NAIC? (RRGs Cannot Be Chartered in this State) a. Traditional Insurance Laws/Regs b. Captive Laws/Regs c. Other (Specify): *If you indicated that your state does not require RRGs to file financial information with NAIC, please explain: 9. How many RRGs have ever domiciled in your state?
Number of domiciled RRGs: (If this number is greater than “0”, please complete Appendix A.) = 1 – 5 RRGs 10. Does your state have staff who are exclusively dedicated to overseeing matters related to captives and/or RRGs domiciled in your state? No (Skip to Q12.) 11. If you answered “Yes” to Question 10, about how many full-time-equivalent (FTE) staff are currently dedicated to working with captives and RRGs domiciled in your state? Part III: Your Role as a Host State Regulator For RRGs Operating in Your State But Domiciled in Another State In Part III (Questions 13–24), we are asking about the role of your state as a “host state” regulator. A host state is one in which RRGs operate but are not domiciled. The 1986 Liability Risk Retention Act limits the amount of oversight that “host state” regulators can perform over RRGs operating but not domiciled in their states. This section pertains only to requirements for RRGs operating but not domiciled in your state. A. Submission of Operational Plans or Feasibility Studies to Host State Regulators The Liability Risk Retention Act, 15 U.S.C. §3902, requires that each RRG submit to the insurance commissioner of each state in which it intends to do business, copy of a plan of operation or a feasibility study which includes the coverages, deductibles, coverage limits, rates and rating classification systems for each line of insurance the group intends to offer, and a copy of any revisions to such plan or study. 13. As of year-end 2003, how many RRGs were registered with your state to conduct business? 14. What steps, if any, does your state routinely take to ensure that RRGs submit plans of operation or feasibility studies? 16. Are you aware of any RRG that substantially changed its business plan but did not provide your state a copy of the altered plan? insurers (i.e., surplus lines insurers)? Please briefly describe the review that you conduct: 18. If you have any comments related to this section, please write them here: B. Submission of Annual Financial Statements to Host State Regulators The Risk Retention Act, 15 U.S.C. §3902, requires that each RRG submit to the insurance commissioner of each State in which it is doing business, a copy of the group’s annual financial statement submitted to the State in which the group is chartered as an insurance company, which statement shall be certified by an independent public accountant and contain a statement of opinion on loss and loss adjustment expense reserves made by a member of the American Academy of Actuaries, or a qualified loss reserve specialist. 19. What steps, if any, does your state routinely take to ensure that RRGs operating but not domiciled in your state submit annual financial statements? 20. To the best of your knowledge, has any RRG operating in your state ever failed to provide copies of its annual financial statement to your state insurance department? insurers (i.e., surplus line insurers)? INSTRUCTIONS: The e-mail inviting you to respond to this survey contained two attachments: (1) the survey itself, including Appendix A and (2) Appendix B--a list of all RRGs that reported financial data to NAIC for 2003. The RRGs appearing first in Appendix B identified themselves to NAIC as writing premiums in your state during 2003 and those appearing second did not. To respond to the questions below, please compare the list of RRGs in Appendix B that reported to NAIC that they wrote premiums in your state in 2003 with your internal records. 23. 
To the best of your knowledge, were all of the RRGs reporting to NAIC that they wrote premiums in your state in 2003, also registered to conduct business in your state in 2003? No -- If “No”, please indicate: The names of RRGs that wrote premiums in your state but were not registered in your state: Not sure -- Some RRGs may have written premiums in our state without first registering with our department but we are double-checking NAIC’s information directly with the RRGs in question. (Request: Please let GAO know if you identify RRGs that operated but were not registered in your state in 2003, even if you have already submitted your survey.) 24. Based on a review of your internal records, did you identify any RRGs that wrote premiums in your state in 2003 but were not listed on NAIC’s list? Part IV: Regulatory Experiences and Opinions on the Risk Retention Act INSTRUCTIONS: If your state has never chartered RRGs, go to Question 29. Otherwise, please begin with Question 25. 25. During the past 24 months, about how many states that host RRGs have contacted your state to seek information about RRGs domiciled in your state? Number of States contacting you: = 0 contacts 26. To the best of your knowledge, has your state ever been asked by a host state regulator to conduct an examination of an RRG domiciled in your state? No (Skip to Q29) No (Please explain why you did not comply): 29. During the past 24 months, have concerns about an RRG led your department to contact the RRG’s domiciliary state regulators? If yes, how many different RRGs have you called about (excluding calls about National Warranty RRG)? 30. To the best of your knowledge, as a host state for RRGs, has your state ever asked a domiciliary state to conduct an examination? 31. Please identify each request your state has made, including the name of the RRG, the date of request, the domiciliary state to which you made the request, and a brief explanation of the circumstances. 32. Did the domiciliary state comply with your state’s request(s)? 33. Do you have any additional comments on questions 25 through 32? 34. For the record, does your State believe RRGs have expanded the availability and affordability of commercial liability insurance for groups that would otherwise have had difficulty in obtaining coverage? Please offer any comments on your response: 35. For the record, what is your State’s opinion as to whether the Risk Retention Act should be expanded to permit RRGs to provide property insurance? Please offer any comments on your response: 36. In your opinion, how adequate or inadequate are the regulatory protections or safeguards built into the Risk Retention Act? (Check one.) Please offer any comments on your response: 37. Does your state have an opinion as to whether the Risk Retention Act should be clarified or amended in any way? (Continue on next page.) Part IV (Questions 38-48) is about vehicle service contracts (VSCs). An increasing number of risk retention groups have been established to insure VSC obligors. These obligors—whether auto dealers or third party administrators—issue VSCs to consumers. Because of this trend, and the recent failure of National Warranty RRG, we are seeking a limited amount of information on how states regulate insurance companies that insure obligors who issue VSCs. INSTRUCTIONS: Please complete Questions 38 to 47. If VSCs are regulated in another office, please ask for assistance. 
If someone other than the person identified in question 2 answered these questions, please provide the appropriate contact information. 38. Please provide the name, title, office, and phone number of the person who completed this part of the survey unless the name is the same as shown in question 2: 39. Are vehicle service contracts (VSCs) regulated as insurance in your state? Yes, but under certain conditions (Specify conditions): Citation for regulation: website link: 40. Has your state adopted the NAIC Service Contracts model law? Somewhat – please explain: 41. Does your state permit third-party administrators, rather than just auto dealers, to issue VSCs? 42. Do any of your state agencies license obligors—whether auto dealers or third-party administrators—before obligors can issue vehicle service contracts in your state? 43. Which of the following requirements, if any, does your state require of obligors before they issue VSCs in your state? (Check all that apply.) Insure VSCs under a reimbursement or other insurance policy Maintain a funded reserve account for its obligations Place in trust with the commissioner a financial security deposit (e.g., a surety bond) Maintain a net worth of $100 million or another amount: (If checked, identify amount: ) 44. If obligors in your state purchase insurance for their VSCs, does your state require that in the event the obligor fails to perform, the insurer issuing the policy must either pay on behalf of the obligor any sums the obligor is legally obligated to pay, or provide any service which the obligor is legally obligated to provide? No (Skip to Question 46) Yes, but under certain conditions 45. If you answered “yes” to question 44, does that mean that the insurer is required to pay 100 percent of the loss (i.e., first dollar coverage) or does the insurer’s risk not attach until some deductible amount is met, such as a loss in excess of the obligor’s reserves? Yes, but under certain conditions Please explain: 46. Does your state require that in the event an obligor fails to pay or provide service on a VSC claim within a certain number of days (e.g., 60) after proof of loss has been filed, the contract holder is entitled to make a claim directly against the insurance company? Yes, but under certain conditions 47. Are VSCs covered by your guaranty fund? Have you completed Appendix A and/or Appendix B? (Please save this document as an MS-Word document, then attach it to an e-mail and send it to GAOrrgSurvey@gao.gov.) Thank you for your assistance. Appendix A: Identification of RRGs Domiciled in Your State For each RRG that your state has chartered since 1981, please provide the following information: (Duplicate the table as many times as needed to complete it for all RRGs domiciled in your state.) NAIC Status Code (See below)

On an annual basis, traditional insurance companies, as well as risk retention groups (RRGs), file various financial data, such as financial statements and actuarial opinions, with their respective state regulatory agencies and the National Association of Insurance Commissioners (NAIC). More specifically, RRGs—although subject to the regulation of one state (their domiciliary state)—can and do sell insurance in multiple states and are required to provide their financial statements to each state in which they sell insurance. Unless exempted by the state of domicile, RRGs generally file their financial statements with NAIC as well.
Additionally, although insurance companies generally are required to file their financial statements based on statutory accounting principles (SAP), captive insurance companies (a category that in many states includes RRGs) are generally permitted, and in some cases required, to use generally accepted accounting principles (GAAP), the accounting and reporting principles generally used by private-sector (nongovernmental) entities. Thus, while some RRGs report their financial information using SAP, others report using GAAP or variations of GAAP and SAP. However, the use or modification of two different sets of accounting principles can lead to different interpretations of an RRG’s financial condition. For example, differences in the GAAP or SAP treatment of assets and acquisition costs can significantly change the reported levels of total assets, capital, and surplus. Because regulators, particularly those in nondomiciliary states, predicate their review and analysis of insurance companies’ financial statements on SAP reporting, the differing accounting methods that RRGs may use could complicate analyses of their financial condition. For instance, based on whatever accounting basis is filed with them, the different levels of surplus reported under GAAP, or SAP, or modifications of each, can change radically the ratios NAIC uses to analyze the financial condition of insurers—undercutting the usefulness of the analyses. Similarly, the accounting differences also affect calculations for NAIC’s risk-based capital standards and may produce significantly different results. For example, an RRG could appear to have maintained capital adequacy under GAAP but would require regulatory action or control if the calculations were based on SAP. Differences in the two sets of accounting principles reflect the different purposes for which each was developed and may produce different financial pictures of the same entity. GAAP (for nongovernmental entities) provides guidance that businesses follow in preparing their general purpose financial statements, which provide users such as investors and creditors with a variety of useful information for assessing a business’s financial performance. GAAP stresses measurement of a business’s earnings from period to period and the matching of revenue and expenses to the periods in which they are incurred. In addition, these financial statements provide information to help investors, creditors, and others to assess the amounts, timing, and uncertainty of future earnings from the business. SAP is designed to meet the needs of insurance regulators, who are the primary users of insurers’ financial statements, and stresses the measurement of an insurer’s ability to pay claims—to protect policyholders from an insurer becoming insolvent (that is, not having sufficient financial resources to pay claims). Additionally, while RRGs may be permitted to report their financial condition using either GAAP or SAP, some regulators permit RRGs to report using nonstandard variants of both sets of accounting principles—to which we refer as modified GAAP and modified SAP. The use of variants further constrains the ability of NAIC and nondomiciliary state analysts to (1) understand the financial condition of the RRGs selling insurance to citizens of their state and (2) compare the financial condition of RRGs with that of traditional insurers writing similar lines of insurance. 
In some cases, RRGs are permitted to count letters of credit (LOC) as assets as a matter of permitted practice under modified versions of GAAP and SAP, although neither accounting method traditionally permits this practice. Further, regulators in some states have allowed RRGs filing under GAAP to modify their financial statements and count surplus notes as assets and add to surplus, another practice which GAAP typically does not allow. According to NAIC, the key differences between GAAP and SAP as they relate to financial reporting of RRGs are the treatment of acquisition costs and assets, differences that affect the total amount of surplus an RRG reports on the balance sheet. This is important because surplus represents the amount of assets over and above liabilities available for an insurer to meet future obligations to its policyholders. Consequently, the interpretation of an RRG’s financial condition can vary based on the set of accounting principles used to produce the RRG’s balance sheet. According to NAIC, GAAP and SAP differ most in their treatment of acquisition costs, which represent expenditures associated with selling insurance such as the commissions, state premium taxes, underwriting, and issuance costs that an insurer pays to acquire business. Under GAAP, firms defer and capitalize these costs as an asset on the balance sheet, then report them as expenses over the life of the insurance policies. This accounting treatment seeks to match the expenses incurred with the related income from policy premiums that will be received over time. Under SAP, firms “expense” all acquisition costs in the year they are incurred because these expenses do not represent assets that are available to pay future policyholder obligations. As illustrated in figure 9, the different accounting treatments of acquisition costs have a direct impact on the firm’s balance sheet. Under GAAP, a firm would defer acquisition costs and have a higher level of assets, capital, and surplus than that same firm would have if reporting under SAP. Under SAP, these acquisition costs would be fully charged in the period in which they are incurred, thereby reducing assets, capital, and surplus. GAAP and SAP also treat some assets differently. Under GAAP, assets are generally a firm’s property, both tangible and intangible, and claims against others that may be applied to cover the firm’s liabilities. SAP uses a more restrictive definition of assets, focusing only on assets that are available to pay current and future policyholder obligations—key information for regulators. As a result, some assets that are included on a GAAP balance sheet are excluded or “nonadmitted” under SAP. Examples of nonadmitted assets include equipment, furniture, supplies, prepaid expenses (such as prepayments on maintenance agreements), and trade names or other intangibles. Some RRGs also modify GAAP to count undrawn LOCs as assets. More specifically, the six leading domiciliary states for RRGs—Arizona, the District of Columbia, Hawaii, Nevada, South Carolina, and Vermont—allow RRGs to count undrawn LOCs as assets, thus increasing their reported assets, capital, and surplus, even though undrawn LOCs are not recognized as an asset under GAAP or SAP. For example, in 2002–2003, state regulators permitted about one-third of RRGs actively writing insurance to count undrawn LOCs as assets and supplement their reported capital. Figure 10 illustrates the impact of different asset treatments for undrawn LOCs. 
In this example, the RRG had a $1.5 million LOC that was counted as an asset under a modified version of GAAP but was not counted as an asset under a traditional use of SAP. In addition, the RRG treated $363,750 in prepaid expenses as an asset, which it would not be able to do under SAP. Under a modified version of GAAP, the RRG’s total assets would be $17,914,359 instead of $16,050,609 under a traditional use of SAP, a difference of $1,863,750. Figure 11 illustrates different treatments of acquisition costs and assets, using a modified version of GAAP and a traditional version of SAP. In this example, under a modified version of GAAP, undrawn LOCs ($2.2 million), acquisition costs ($361,238), and prepaid expenses ($15,724) are valued as an additional $2,576,962 in assets with a corresponding increase in capital and surplus. The overall impact of treating each of these items as assets under a modified version of GAAP is significant because the RRG reported a total of $2,603,656 in capital and surplus, whereas it would report only $26,694 under a traditional use of SAP. Under traditional GAAP, capital and surplus would be reported as $403,656 ($2,603,656 minus the $2,200,000 undrawn LOC). Additionally, the two accounting principles treat surplus notes differently. Although SAP restricts certain assets, it permits (with regulatory approval) the admission of surplus notes as a separate component of statutory surplus, which GAAP does not. When an insurance company issues a surplus note, it is in effect making a promise to repay a loan, but one that the lender has agreed cannot be repaid without regulatory approval. Both SAP and GAAP recognize the proceeds of the loan as an asset to the extent they have been borrowed but not expended (are still available). However, since the insurer cannot repay the debt without approval, the regulator knows that the proceeds of the loan are available to pay claims, if necessary. Thus, under SAP, with its emphasis on the ability of an insurer to pay claims, the proceeds are added to capital and surplus rather than recognizing a corresponding liability to repay the debt. GAAP, on the other hand, requires companies issuing surplus notes to recognize a liability for the proceeds of the loan, rather than adding to capital and surplus since the insurer still has to repay the debt. However, according to NAIC data, four state regulators have allowed RRGs to modify GAAP and report surplus notes as part of capital and surplus during either 2002 or 2003. A total of 10 RRGs between the four states modified GAAP in this manner and were able to increase their reported level of capital and surplus. Finally, in addition to the differences between GAAP and SAP already discussed, and as they have been modified by RRGs, other differences between the two accounting methods include the treatment of investments, goodwill (for example, an intangible asset such as a company’s reputation), and deferred income taxes. According to NAIC, while these differences may affect a company’s financial statement, they generally do not have as great an impact as the differences in the treatment of acquisition costs and assets. Use or modification of GAAP and the modification of SAP can also affect the ability of NAIC and regulators to evaluate the financial condition of some RRGs. Although subject to the regulation of one state (their domiciliary state), RRGs can and do sell insurance in multiple states and are required to provide financial statements to each state in which they sell insurance. 
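Before turning to how these filings are analyzed, the arithmetic behind figures 10 and 11 can be summarized in a short sketch (a minimal illustration in Python; the dollar amounts are those cited above, while the helper function and its parameter names are simplifications introduced here for illustration, not an accounting standard):

```python
# A minimal sketch of the add-backs behind figures 10 and 11 above.
# Dollar amounts are the ones cited in this appendix; the function and
# its parameter names are illustrative simplifications.

def modified_gaap_addback(sap_amount: float,
                          undrawn_loc: float = 0.0,
                          deferred_acquisition_costs: float = 0.0,
                          prepaid_expenses: float = 0.0,
                          count_loc: bool = True) -> float:
    """Add back items a modified-GAAP filing treats as assets (and hence
    surplus) but a traditional SAP filing excludes or expenses."""
    addback = deferred_acquisition_costs + prepaid_expenses
    if count_loc:
        addback += undrawn_loc
    return sap_amount + addback

# Figure 10: total assets.
assets_sap = 16_050_609
assets_mod_gaap = modified_gaap_addback(assets_sap,
                                        undrawn_loc=1_500_000,
                                        prepaid_expenses=363_750)
print(assets_mod_gaap)                        # 17,914,359 -- a $1,863,750 difference

# Figure 11: capital and surplus under three presentations.
surplus_sap = 26_694
surplus_mod_gaap = modified_gaap_addback(surplus_sap,
                                         undrawn_loc=2_200_000,
                                         deferred_acquisition_costs=361_238,
                                         prepaid_expenses=15_724)
surplus_trad_gaap = modified_gaap_addback(surplus_sap,
                                          undrawn_loc=2_200_000,
                                          deferred_acquisition_costs=361_238,
                                          prepaid_expenses=15_724,
                                          count_loc=False)
print(surplus_mod_gaap, surplus_trad_gaap)    # 2,603,656 and 403,656
```

The same add-back logic is what makes the ratio and risk-based capital comparisons discussed next sensitive to the accounting basis an RRG uses.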
In almost all cases, RRGs also provide financial statements to NAIC for analysis and review. NAIC uses financial ratios and risk-based capital standards to evaluate the financial condition of insurance companies and provides this information to state regulators in an effort to help them better target their regulatory efforts. NAIC calculates the ratios using the data from the financial statements as they are filed by the companies. However, since both the formulas and the benchmarks for the financial ratios are based on SAP, the ratio information may not be meaningful to NAIC or the state regulators if the benchmarks are compared with the ratios derived from financial information based on a standard or modified version of GAAP, or a modified version of SAP. Further, the use of GAAP, modified GAAP, or modified SAP could make risk-based capital standards less meaningful because these standards also are based on SAP. (We discuss accounting differences in relation to risk-based capital standards in more detail at the end of this appendix.) To illustrate how the use of two different accounting methods can impede an assessment of an RRG’s financial condition, we selected two financial ratios that NAIC commonly uses to analyze the financial condition of insurers—net premiums written to policyholders’ surplus (NPW:PS) and reserves to policyholders’ surplus. Using SAP, NAIC has established a “usual range” or benchmark for these financial indicators from studies of the ratios for companies that became insolvent or experienced financial difficulties in recent years. As part of its review process, NAIC compares insurers’ ratios with these benchmarks. We selected these two ratios because of the emphasis regulators place on insurance companies having an adequate amount of surplus to meet claims and because policyholders’ surplus is affected by the different accounting treatments used by RRGs. The NPW:PS ratio is one of the 12 ratios in NAIC’s Insurance Regulatory Information System (IRIS) and measures the adequacy of a company’s ability to pay unanticipated future claims on that portion of its risk that it has not reinsured. The higher the NPW:PS ratio, which is typically expressed as a percentage, the more risk a company bears in relation to the policyholders’ surplus available to absorb unanticipated claims. In other words, the higher the NPW:PS, the more likely an insurance company could experience difficulty paying unanticipated claims. Since surplus, as reflected by the availability of assets to pay claims, is a key component of the ratio, the use of GAAP, modified GAAP, or modified SAP instead of SAP may affect the results substantially. As shown in figure 12, each of the three RRGs has a lower NPW:PS ratio when the ratio is calculated using balance sheet information based on a modified version of GAAP than when the same ratio is based on SAP. In other words, under modified GAAP, each of these three RRGs would appear to have a greater capability to pay unanticipated claims than under SAP. However, one RRG (RRG from figure 9) is below the NAIC benchmark regardless of which accounting method is used. Some of the higher NPW:PS ratios under SAP could provide a basis for regulatory concern. 
NAIC considers NPW:PS ratios of 300 percent or less as “acceptable” or “usual.” However, according to NAIC staff, companies that primarily provide liability insurance generally should maintain lower NPW:PS ratios than insurers with other lines of business because estimating potential losses for liability insurance is more difficult than estimating potential losses for other types of insurance. Since RRGs only can provide liability insurance, NAIC staff believe a value above 200 percent (in conjunction with other factors) could warrant further regulatory attention. Using this lower benchmark, two RRGs (from figures 9 and 11) meet the benchmark criteria under modified GAAP, but all three RRGs fail to meet the benchmark under SAP. Thus, an analysis of an RRG’s financial condition as reported under modified GAAP could be misleading, particularly when compared with other insurers that report under SAP. The reserves to policyholders’ surplus ratio is one of NAIC’s Financial Analysis Solvency Tools ratios and represents a company’s loss and loss adjustment expense reserves in relation to policyholders’ surplus. This ratio, which is typically expressed as a percentage, provides a measure of how much risk each dollar of surplus supports and an insurer’s ability to pay claims, because if reserves were inadequate, the insurer would have to pay claims from surplus. The higher the ratio, the more an insurer’s ability to pay claims is dependent upon having and maintaining reserve adequacy. Again, surplus is a key component of the ratio and the use of GAAP, modified GAAP, or modified SAP rather than SAP could affect the ratio. As shown in figure 13, each of the three RRGs has higher reserves to policyholders’ surplus ratios when the calculations are derived from balance sheet numbers based on SAP rather than modified GAAP. Under the modified version of GAAP, each of the three RRGs reports higher levels of surplus and consequently less risk being supported by each dollar of surplus (a lower ratio) compared with SAP. Higher reserves to policyholders’ surplus ratios could provide a basis for regulatory concerns. According to NAIC, ratios of 200 percent or less are considered “acceptable” or “usual” for RRGs. However, although the RRG from figure 11 meets NAIC’s benchmark under modified GAAP, it significantly exceeds NAIC’s benchmark when the ratio is calculated based on SAP—a condition that could warrant further regulatory attention. NAIC applies risk-based capital standards to insurers in order to measure their capital adequacy relative to their risks. Monitoring capital levels with other financial analyses helps regulators identify financial weaknesses. However, since risk-based capital standards are based on SAP, numbers used to calculate capital adequacy that are derived from any other accounting basis (GAAP, modified GAAP, or modified SAP) could distort the application of the standards and make resulting assessments less meaningful. NAIC uses a formula that incorporates various risks to calculate an “authorized control level” of capital, which is used as a point of reference. The authorized control level is essentially the point at which a state insurance commissioner has legal grounds to rehabilitate (that is, assume control of the company and its assets and administer it with the goal of reforming and revitalizing it) or liquidate the company to avoid insolvency. 
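Before applying those capital standards, the two surplus-based ratio screens just described can be sketched as follows (a minimal illustration in Python; the 200 percent thresholds are the benchmarks cited above for liability-only writers and for RRG reserves, while the premium, reserve, and surplus figures are hypothetical and do not correspond to the RRGs in figures 12 and 13):

```python
# Minimal sketch of the NPW:PS and reserves-to-surplus screens described
# above. Benchmarks are the 200 percent values cited in this appendix;
# the premium, reserve, and surplus amounts below are hypothetical.

def ratio_pct(numerator: float, surplus: float) -> float:
    """Express a balance-sheet amount as a percentage of policyholders' surplus."""
    return 100.0 * numerator / surplus

def screen_rrg(net_premiums_written: float, reserves: float, surplus: float,
               npw_benchmark: float = 200.0, reserve_benchmark: float = 200.0) -> dict:
    """Return both ratios and whether each exceeds its benchmark.
    (NAIC's general IRIS benchmark for NPW:PS is 300 percent; 200 percent
    is the lower threshold NAIC staff cite for liability-only writers.)"""
    npw_ps = ratio_pct(net_premiums_written, surplus)
    res_ps = ratio_pct(reserves, surplus)
    return {
        "NPW:PS %": round(npw_ps, 1),
        "NPW:PS above benchmark": npw_ps > npw_benchmark,
        "Reserves:PS %": round(res_ps, 1),
        "Reserves:PS above benchmark": res_ps > reserve_benchmark,
    }

# Hypothetical RRG: identical premiums and reserves, surplus reported two ways.
npw, reserves = 2_500_000, 3_000_000
print("modified GAAP surplus:", screen_rrg(npw, reserves, surplus=1_500_000))
print("SAP surplus:          ", screen_rrg(npw, reserves, surplus=900_000))
```

Because the only input that changes between the two calls is surplus, the sketch mirrors the pattern in figures 12 and 13: the same book of business can fall on either side of a benchmark depending on the accounting basis used to report surplus.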
NAIC establishes four levels of company and regulatory action that depend on a company’s total adjusted capital (TAC) in relation to its authorized control level, with more severe action required as TAC decreases. They are:

Company action level. If an insurer’s TAC falls below the company action level, which is 200 percent of the authorized control level, the insurer must file a plan with the insurance commissioner that explains its financial condition and how it proposes to correct the capital deficiency.
Regulatory action level. If an insurer’s TAC falls below the regulatory action level, which is 150 percent of its authorized control level, the insurance commissioner must examine the insurer and, if necessary, institute corrective action.
Authorized control level. If an insurer’s TAC falls below its authorized control level, the insurance commissioner has the legal grounds to rehabilitate or liquidate the company.
Mandatory control level. If an insurer’s TAC falls below the mandatory control level, which is 70 percent of its authorized control level, the insurance commissioner must seize the company.

Because the differences between GAAP and SAP, as well as the modification of both accounting bases, affect an RRG’s capital, the differences also affect the TAC calculation for an RRG, and when compared to the control levels, could lead an analyst to draw different conclusions about the level of regulatory intervention needed. For example, in table 2, we place the three RRGs that we have been using as examples in the action categories that would result from calculating each TAC under the two accounting methods or their variants. The accounting methods used have no effect in terms of regulator action for the first RRG (because the RRG maintained a TAC level of more than 200 percent of the authorized control level). The other two RRGs change to categories that require more severe actions.

Open liquidations (estimates)
As of 2003, the liquidators reported that losses could reach about $74 million. The liquidators have not updated their loss estimate since 2003.
As of May 2005, timely claims against the RRG numbered 1,990 but it is not known what percentage of approved claims will be paid.
As of May 2005, timely claims against the RRG numbered 2,420 but it is not known what percentage of approved claims will be paid.
As of May 2005, timely claims against the RRG numbered 2,150, but it is not known what percentage of approved claims will be paid.
As of July 2005, the liquidation was expected to be closed within a few months. No claims have been paid yet for the unauthorized insurance.
Open liquidations (estimates)
As of May 2005, the receivership estimated that the RRG had about 350 outstanding claims, valued at about $6 million. The receiver expected to pay claims at 50 cents on the dollar.
As of July 2005, the overall estimated loss was undetermined. Claims are being paid at 86 cents on the dollar.
The RRG’s business was assumed by another insurance company, pursuant to an approved plan of rehabilitation.
As of July 2005, the overall loss estimate was about $1.5 million. Claims have been paid at 82.5 cents on the dollar.
As of April 2005, the overall loss was estimated at $4.2 million with about 260 claims filed. Distribution to date is 32 cents on the dollar and may increase to 42 cents on the dollar.
Open (but expected to close soon)
As of June 2005, claims were expected to be paid at 63 cents on the dollar.
Open liquidations (estimates)
Claims paid in full.
As of June 2005, the claims were expected to be paid in full.
The claims have been paid in full, and the receivership is expected to close in a few years.
Claims paid in full.
The overall loss was about $6 million, and claims were paid at 75 cents on the dollar.
The loss was estimated at $5 million, and claims were paid at 65 cents on the dollar.
As of July 2005, claims were being paid at 50 cents on the dollar and will pay an estimated additional 7 cents at closing. The overall estimated loss is about $27 million.
Open liquidations (estimates)
Payments have been made at 60 cents on the dollar, with a possible final distribution of 4 cents on the dollar.
Liquidated claims have been paid in full, and money has been reserved to pay the estimated amount of unliquidated claims (as they become payable).
The overall loss estimate is $945,000, and claims were paid at 61 cents on the dollar.
According to a Tennessee official, as of December 31, 2004, TRA had approximately $17 million in assets and $61 million in liabilities in expected losses for policy claims.

Richard J. Hillman, Director
Financial Markets and Community Investment
441 G Street, N.W.
RE: GAO Report on Risk Retention Groups

The National Association of Insurance Commissioners (NAIC) appreciates this opportunity to review the GAO draft report on Risk Retention Groups. As you know, the NAIC is a voluntary organization of the chief insurance regulatory officials of the 50 states, the District of Columbia, and five U.S. territories. The association’s overriding objective is to assist state insurance regulators in protecting consumers and helping maintain the financial stability of the insurance industry by offering financial, actuarial, legal, computer, research, market conduct, and economic expertise. Formed in 1871, it is the oldest association of state officials. Several members of the NAIC staff and Director L. Tim Wagner, in his capacity as chair of the NAIC’s Property and Casualty Insurance Committee, reviewed the draft report, and the consensus opinion among them was that the report was well thought out and well documented. The research methods employed were solid, and the results obtained were carefully interpreted to obtain a clear picture of how states are undertaking their responsibilities with regard to regulation of risk retention groups. The report explored the issues that are pertinent to the protection of risk retention group members and the third-party claimants that are affected by the coverage provided by the risk retention groups. Overall, the reviewers believed that the report was materially accurate. The reviewers agree with the recommendations contained in the report for Congress and for insurance regulators. Attached to this letter are several editorial suggestions and clarifications that we believe would improve the final document. Thanks again for all your hard work in making government accountable to the public that it serves.

Lawrence D. Cluff was the Assistant Director for this report. In addition, Sonja J. Bensen, James R. Black, William R. Chatlos, Tarek O. Mahmassani, Omyra M. Ramsingh, and Barbara M. Roesmann made key contributions to this report.
Congress authorized the creation of risk retention groups (RRGs) to increase the availability and affordability of commercial liability insurance. An RRG is a group of similar businesses that creates its own insurance company to self-insure its risks. Through the Liability Risk Retention Act (LRRA), Congress partly preempted state insurance law to create a single-state regulatory framework for RRGs, although RRGs are multistate insurers. Recent shortages of affordable liability insurance have increased RRG formations, but recent failures of several large RRGs have also raised questions about the adequacy of RRG regulation. This report (1) examines the effect of RRGs on insurance availability and affordability; (2) assesses whether LRRA's preemption has resulted in significant regulatory problems; and (3) evaluates the sufficiency of LRRA's ownership, control, and governance provisions in protecting the best interests of the RRG insureds. RRGs have had a small but important effect in increasing the availability and affordability of commercial liability insurance for certain groups. While RRGs accounted for about $1.8 billion, or about 1.17 percent, of all commercial liability insurance in 2003, members have benefited from consistent prices, targeted coverage, and programs designed to reduce risk. A recent shortage of affordable liability insurance prompted the creation of many new RRGs. More RRGs formed in 2002-2004 than in the previous 15 years, and about three-quarters of the new RRGs offered medical malpractice coverage. LRRA's partial preemption of state insurance laws has resulted in a regulatory environment characterized by widely varying state standards. In part, state requirements differ because some states charter RRGs as "captive" insurance companies, which operate under fewer restrictions than traditional insurers. As a result, most RRGs are domiciled in six states that offer captive charters (including some states that have limited experience in regulating RRGs) rather than in the states where they conduct most of their business. Additionally, because most RRGs (as captives) are not subject to the same uniform, baseline standards for solvency regulation as traditional insurers, state requirements in important areas such as financial reporting also vary. For example, some regulators may have difficulty assessing the financial condition of RRGs operating in their state because not all RRGs use the same accounting principles. Further, some evidence exists to support regulator assertions that domiciliary states may be relaxing chartering or other requirements to attract RRGs. Because LRRA does not specify characteristics of ownership and control, or establish governance safeguards, RRGs can be operated in ways that do not consistently protect the best interests of their insureds. For example, LRRA does not explicitly require that the insureds contribute capital to the RRG or recognize that outside firms typically manage RRGs. Thus, some regulators believe that members without "skin in the game" will have less interest in the success and operation of their RRG and that RRGs could be chartered for purposes other than self-insurance, such as making profits for entrepreneurs who form and finance an RRG. LRRA also provides no governance protections to counteract potential conflicts of interest between insureds and management companies.
In fact, factors contributing to many RRG failures suggest that sometimes management companies have promoted their own interests at the expense of the insureds. The combination of single-state regulation, growth in new domiciles, and wide variance in regulatory practices has increased the potential that RRGs would face greater solvency risks. As a result, GAO believes RRGs would benefit from uniform, baseline regulatory standards. Also, because many RRGs are run by management companies, they could benefit from corporate governance standards that would establish the insureds' authority over management.
Reducing the costs and time of its decision-making and improving its ability to deliver what is expected or promised have not been given adequate attention throughout the Forest Service. As a result, deficiencies within the decision-making process that have been known to the agency for a decade or more have not been corrected. To compensate for the increased costs and time of decision-making and the inability to implement planned projects, the Forest Service must request more annual appropriations to achieve fewer planning objectives. The Chief Financial Officers Act of 1990 requires federal agencies to provide complete, reliable, consistent, and timely financial information. However, the Forest Service has made little progress in implementing the act’s provisions. An audit of the agency’s financial statements for fiscal year 1995 by Agriculture’s Inspector General resulted in an adverse opinion because of “pervasive errors, material or potentially material misstatements, and/or departures from applicable Government accounting principles affecting several Financial Statement accounts.” Among the audit’s findings, the Inspector General reported that the Forest Service could not account for expenditures of $215 million in fiscal year 1995. As a result, Forest Service managers are unable to adequately monitor and control spending levels for various programs and activities relating to decision-making or to measure the extent to which changes affect costs and efficiency. Corrective actions to address accounting and financial reporting problems identified by the Inspector General are not scheduled to be implemented until the end of fiscal year 1998. Similarly, the Forest Service has not been successful in achieving the objectives in its forest plans or implementing planned projects. For example, in response to congressional concerns about the Forest Service not being able to deliver what is expected or promised, the Chief, in the fall of 1991, formed a task force of employees from throughout the agency to review the issue of accountability. The task force’s February 1994 report set forth a seven-step process to strengthen accountability. Steps in the process include (1) establishing work agreements that include measures and standards with customer involvement, (2) assessing performance, and (3) communicating results to customers. However, the task force’s recommendations have not been implemented. Rather, they were identified as actions that the agency plans to implement over the next decade. The task force’s recommendations, as well as those in other studies, are intended to address some of the long-standing deficiencies within the Forest Service’s decision-making process that have driven up costs and time and/or driven down the ability to achieve planned objectives. These deficiencies include (1) not adequately monitoring the effects of past management decisions, (2) not maintaining a centralized system of comparable environmental and socioeconomic data, and (3) not adequately involving the public throughout the decision-making process. Monitoring provides the information needed to determine the effects of management decisions, including their cumulative impact. Moreover, monitoring can be used as an effective tool when the effects of a decision may be difficult to determine in advance because of uncertainty or costs. However, the Forest Service (1) has historically given low priority to monitoring during the annual competition for scarce resources, (2) continues to approve projects without an adequate monitoring component, and (3) does not ensure that its managers report on the results of monitoring, as its current regulations require.
Because of the inefficiencies in its decision-making process, the Forest Service must request more funds to accomplish fewer objectives during the yearly budget and appropriation process. For example, in fiscal year 1991, the Congress asked the Forest Service to develop a multiyear program to reduce the costs of its timber program by not less than 5 percent per year. The Forest Service responded to these and other concerns by undertaking three major cost-efficiency studies and is preparing to undertake a fourth. However, with no incentive to act, the agency has not implemented any of the recommended improvements agencywide. In the interim, the costs associated with preparing and administering timber sales have continued to rise. As a result, for fiscal year 1998, the agency is requesting $12 million, or 6 percent, more for timber sales management than was appropriated for fiscal year 1997 while proposing to offer 0.4 billion board feet, or 10 percent, less timber for sale. The Government Performance and Results Act of 1993 is designed to hold federal agencies more accountable for their performance by requiring them to establish performance goals, measures, and reports that provide a system of accountability for results. In addition, the Clinger-Cohen Act of 1996 (formerly entitled the Information Technology Management Reform Act of 1996) and the Paperwork Reduction Act of 1995 are intended to hold federal agencies more accountable for the adequacy of their information systems and data by providing that they shall establish goals, measure performance, and report on how well their information technology and data are supporting their mission-related programs. Although it is still too early to tell what impact these laws, together with the Chief Financial Officers Act, will have on the Forest Service, they provide a useful framework for strengthening accountability within the agency and improving the efficiency and effectiveness of its decision-making. Issues that transcend the agency’s administrative boundaries and jurisdiction also affect the efficiency and effectiveness of the Forest Service’s decision-making process. These issues include reconciling differences in the geographic areas that must be considered in reaching decisions under different planning and environmental laws. The Forest Service and other federal land management agencies are authorized to plan primarily along administrative boundaries, such as those defining national forests and parks. Conversely, environmental statutes and regulations require the agencies to analyze environmental issues and concerns along the boundaries of natural systems, such as watersheds and vegetative and animal communities. For example, regulations implementing the National Environmental Policy Act require the agencies to assess the cumulative impact of federal and nonfederal activities on the environment. Because the boundaries of administrative units and natural systems are frequently inconsistent, federal land management plans have often considered effects only on portions of natural systems or portions of the habitats of wide-ranging species, such as migratory birds, bears, and anadromous fish (including salmon). For example, the Interior Columbia River Basin contains 74 separate federal land units, including 35 national forests and 17 Bureau of Land Management districts, each with its own plan. 
Not analyzing effects on natural systems and their components at the appropriate ecological scale results in duplicative environmental analyses—in individual plans and projects—increasing the costs and time required for analysis and reducing the effectiveness of federal land management decision-making. Addressing issues that transcend the administrative boundaries and jurisdictions of the Forest Service and of other federal agencies will, at a minimum, require unparalleled coordination and cooperation among federal agencies. However, federal land management and regulatory agencies sometimes do not work efficiently and effectively together to address issues that transcend their boundaries and jurisdictions. Disagreements often stem from differing evaluations of environmental effects and risks, which in turn reflect the agencies' disparate missions and responsibilities. In addition, the environmental and socioeconomic data collected by federal agencies are often not comparable, large gaps in the information exist, and federal agencies lack awareness of who has what information. Over the past few years, several major studies have examined the need to reconcile the differences in the geographic areas that federal agencies must consider when reaching decisions. Among the options that have been suggested are changes to the Council on Environmental Quality's regulations and guidance implementing the procedural provisions of the National Environmental Policy Act. According to Council officials, changes to the act's regulations and guidance are not being considered at this time. Instead, the Council plans to rely primarily on less binding interagency agreements. However, since federal agencies sometimes do not work efficiently and effectively together to address issues that transcend their boundaries and jurisdictions and often lack the environmental and socioeconomic data required to make informed decisions, strong leadership by the Council would help to ensure that interagency agreements accomplish their intended objectives. Finally, differences in the requirements of numerous planning and environmental laws, enacted primarily during the 1960s and 1970s, produce inefficiency and ineffectiveness in the Forest Service's decision-making. Differences among their requirements and differing judicial interpretations of their requirements have caused some issues to be analyzed or reanalyzed at various stages in the Forest Service's decision-making process, as well as in the decision-making processes of other federal agencies, without their timely resolution, increasing the costs and time of decision-making and reducing the ability of the Forest Service and other land management agencies to achieve the objectives in their plans. In particular, the listing of a species as threatened or endangered under the Endangered Species Act after a forest plan has been approved may require the Forest Service to reinitiate consultations on the plan. The listing may also prohibit the agency from implementing projects under the plans that may affect the species until the new round of consultations has been completed. For example, recent federal court decisions required the Forest Service to reinitiate consultations on several approved forest plans after a species of salmon in the Pacific Northwest and a species of owl in the Southwest were listed as threatened under the Endangered Species Act. The courts' rulings prohibited the agency from implementing projects under the plans that might affect the species until the new rounds of consultations with the Fish and Wildlife Service and/or the National Marine Fisheries Service had been completed.
Additionally, through differing judicial interpretations of the same statutory requirements, the courts have established conflicting requirements. For example, three federal circuit courts of appeals have held that the approval of a forest plan represents a decision that can be judicially challenged and prohibited from being implemented. Conversely, two other federal circuit courts of appeals have held that a forest plan does not represent a decision and that only a project can be judicially challenged, at which time the adequacy of the plan’s treatment of larger-scale environmental issues arising in the project can be reconsidered. Requirements to consider new information and events, coupled with differing judicial interpretations of the same statutory requirements, have made it difficult for the Forest Service and other federal agencies to predict when any given decision can be considered final and can be implemented. Agency officials perceive that the same issues are recycled under different planning and environmental laws rather than resolved in a timely manner. In addition, environmental laws generally address individual resources, such as endangered and threatened species, water, and air. Conversely, planning statutes generally establish objectives for multiple resources, such as sustaining diverse plant and animal communities, securing favorable water flow conditions, and preserving wilderness. These different approaches to achieving similar environmental objectives have sometimes been difficult for the Forest Service and other federal agencies to reconcile, at least in the short term. For example, prescribed burning to restore the forests’ health and to sustain diverse plant and animal communities may be appropriate under the National Forest Management Act but may be difficult to reconcile in the short term with air quality standards under the Clean Air Act. In March 1995, the Secretary of Agriculture pledged to work with the Congress to identify statutory changes to improve the processes for implementing the Forest Service’s mission. However, neither his analysis nor options for changing the current statutory framework suggested by the Forest Service in 1995 have been sent to the Congress. Administration officials have said that they are hesitant to suggest changes to the procedural requirements of planning and environmental laws because they believe that the Congress may also make substantive changes to the laws with which they would disagree. On the basis of our work to date, we believe that statutory changes to improve the efficiency and effectiveness of the Forest Service’s decision-making process cannot be identified until after agreement is reached on which uses the agency is to emphasize under its broad multiple-use and sustained-yield mandate and how it is to resolve conflicts or make choices among competing uses on its lands. Our report to you and other requesters, to be issued this spring, will identify the increasing shift in emphasis in the Forest Service’s plans from producing timber to sustaining wildlife and fish. This shift is taking place in reaction to requirements in planning and environmental laws—reflecting changing public values and concerns—and their judicial interpretations, together with social, ecological, and other factors. 
In particular, section 7 of the Endangered Species Act represents a congressional design to give greater priority to the protection of endangered species than to the current primary missions of the Forest Service and other federal agencies. When proposing a project, the Forest Service bears the burden of proof to demonstrate that its actions will not likely jeopardize listed species. The increasing emphasis on sustaining wildlife and fish conflicts with the older emphasis on producing timber and underlies the Forest Service's inability to achieve the goals and objectives for timber production set forth in many of the first forest plans. In addition, this attention to sustaining wildlife and fish will likely constrain future uses of the national forests, such as recreation. The demand for recreation is expected to grow and may increasingly conflict with both sustaining wildlife and fish and producing timber on Forest Service lands. While the agency continues to increase its emphasis on sustaining wildlife and fish, the Congress has never explicitly accepted this shift in emphasis or acknowledged its effects on the availability of other uses on national forests. Disagreement over the Forest Service's priorities, both inside and outside the agency, has not only hampered efforts to improve the efficiency and effectiveness of its decision-making but also inhibited it in establishing the goals and performance measures needed to ensure its accountability. If agreement is to be reached on efforts to improve the Forest Service's decision-making and if the agency is to be held accountable for its expenditures and performance, the Forest Service will need to consult with the Congress on its strategic long-term goals and desired outcome measures, as the Government Performance and Results Act requires. Such a consultation would create an opportunity for the Forest Service to gain a better understanding of which uses it is to emphasize under its broad multiple-use and sustained-yield mandate and how it is to resolve conflicts or make choices among competing uses on its lands. In summary, Mr. Chairman, the Forest Service's decision-making process is broken and in need of repair. While much can be done within the current statutory framework to improve the efficiency and effectiveness of the process, strong leadership, both throughout the Forest Service and within the Council on Environmental Quality, will be required. Moreover, sustained oversight by the Congress will also be important. Differences among the requirements of planning and environmental laws also need to be addressed. However, at a June 1996 hearing at which both you and we testified, you stressed that "form must follow function" and that the immediate priority is to clarify the Forest Service's functions. We agreed with you then, and we agree with you now. Clarifying priorities within the Forest Service's multiple-use and sustained-yield mission should provide the agency with a better understanding of which uses it is to emphasize and how it is to resolve conflicts or make choices among competing uses on its lands. Once this is done, the legislative changes that are needed to clarify or modify congressional intentions and expectations can be identified.
GAO discussed the preliminary results of its work on the decisionmaking process used by the Forest Service in carrying out its mission, focusing on the underlying causes of inefficiencies and ineffectiveness in the Forest Service's decisionmaking process. GAO noted that: (1) its ongoing work has identified three underlying causes of inefficiency and ineffectiveness in the Forest Service's decisionmaking process; (2) first, the agency has not given adequate attention to improving its decisionmaking process, including improving its accountability for expenditures and performance; (3) as a result, long-standing deficiencies within its decisionmaking process that have contributed to increased costs and time and/or the inability to achieve planned objectives have not been corrected; (4) second, issues that transcend the agency's administrative boundaries and jurisdiction have not been adequately addressed; (5) in particular, the Forest Service and other federal agencies have had difficulty reconciling the administrative boundaries of national forests, parks, and other federal land management units with the boundaries of natural systems, such as watersheds and vegetative and animal communities, both in planning and in assessing the cumulative impact of federal and nonfederal activities on the environment; (6) third, the requirements of numerous planning and environmental laws, enacted during the 1960s and 1970s, have not been harmonized; (7) as a result, differences among the requirements of different laws and their differing judicial interpretations require some issues to be analyzed or reanalyzed at different stages in the different decisionmaking processes of the Forest Service and other federal agencies without any clear sequence leading to their timely resolution; (8) additional differences among the statutory requirements for protecting resources, such as endangered and threatened species, water, air, diverse plant and animal communities, and wilderness, have also sometimes been difficult to reconcile; (9) however, on the basis of its work to date, GAO believes that statutory changes to improve the efficiency and effectiveness of the Forest Service's decisionmaking process cannot be identified until agreement is first reached on which uses the agency is to emphasize under its broad multiple-use and sustained-yield mandate and how it is to resolve conflicts or make choices among competing uses on its lands; and (10) disagreement over which uses should receive priority, both inside and outside the agency, has also inhibited the Forest Service in establishing the goals and performance measures needed to ensure its accountability.
In order to collect tax revenue that would otherwise go uncollected and resolve threats to the integrity and fairness of the tax system, IRS encourages the public to report possible federal tax noncompliance and fraud. IRS has a web page to help the public find resources for reporting both general and specific types of federal tax noncompliance. IRS’s information referral process is for reporting general types of tax noncompliance, including failure to file a tax return, report income, or pay taxes owed. IRS has other specialized processes for reporting specific types of tax noncompliance, such as identity theft and misconduct by a tax return preparer. In fiscal year 2015, IRS received over 87,000 information referrals, as shown in figure 1. During fiscal years 2013 and 2014, IRS revised the information referral instructions for the public to help clarify the other specific forms to use to report directly to IRS’s specialized referral programs, which IRS officials said has contributed to the reduced volume in information referrals. The public reports general tax noncompliance—by individuals, businesses, or both—by submitting Form 3949-A. The form provides checkboxes for identifying 16 types of noncompliance (including activities such as organized crime, narcotics income, and false/altered documents) and requests the dollar amounts and years of unreported income, among other things (see figure 2). The form instructions, which are reproduced in appendix II, explain the different checkboxes for reporting tax noncompliance. In completing the information referral form, a person is asked to provide as much information as possible identifying the name, address, and tax identification number of the taxpayer reported. Providing partial information does not mean IRS will reject the information outright, but insufficient identifying information may preclude IRS from making use of a referral. The person submitting an information referral is also asked to provide personal identifying information but may submit anonymously. Persons submitting an information referral are not entitled to any reward if the information is used and results in additional tax being collected. Once the form is completed, the person mails it to the IRS submission processing center in Fresno, California, where the form enters the information referral process. As summarized in table 1, multiple units across IRS play important roles in processing information referrals submitted by the public: The Image Control Team (ICT), within the submission processing center in Fresno, California, receives and initially screens paper information referrals received by mail for routing to other IRS units for further action. Like other ICT clerical units, the Fresno ICT is also responsible for processing, distributing, and managing other time-sensitive taxpayer correspondence. Submission Processing management assigns Fresno ICT staff for information referral screening as resources are available. Accounts Management maintains the guidelines and facilitates coordination between Submission Processing, Fresno ICT, and the various IRS units, such as SB/SE and W&I, which receive information referrals for follow-up. Tiger Team—established in October 2012 by IRS with membership composed of representatives from Accounts Management, Submission Processing, and IRS audit and investigation operating divisions—is responsible for developing the guidelines for screening and routing information referrals for further action.
The Tiger Team meets bimonthly to discuss the guidelines, provide feedback on information referrals misrouted to other IRS units, and receive updates on the number of information referrals received and screened and routed by Fresno ICT. In addition, the Tiger Team receives questions from Fresno ICT regarding how the information referrals should be handled when new situations arise. Following initial processing, IRS operating divisions screen information referrals for audit and investigation potential. The W&I division handles referrals about individual income taxpayers. The SB/SE division handles referrals about small businesses and business income for individual taxpayers. The Large Business and International division handles referrals about large corporations and partnerships as well as international income tax issues. The Tax Exempt and Government Entities (TE/GE) division receives referrals about tax-exempt organizations, employee retirement plans, and government entities. The Criminal Investigation (CI) division receives referrals about schemes and transactions involving larger dollars or numbers of taxpayers. The information referral screening and routing process involves multiple steps, as shown in figure 3. The process begins when a person mails a paper information referral reporting alleged tax noncompliance to IRS in Fresno, California. The process ends with the information referral being retained for destruction or routed by mail to an operating division or other IRS unit for additional review to determine if the alleged wrongdoer may owe taxes or face other enforcement actions. Intake. Information referrals alleging tax noncompliance are initially received in the Submission Processing, Receipt and Control unit (mailroom) of the IRS Fresno campus: mailroom staff date stamp the forms and route them to ICT. Upon receipt in ICT, the information referrals are date stamped a second time and batched in bundles of 25 or less. The inventory of information referrals is stored on a wall of bookcases until Fresno ICT clerical staff are available to screen them. Initial screening. Once assigned to review information referrals, Fresno ICT clerical staff screen the information referral text describing the alleged violation (shown in figure 2). To do this, staff look for key words that signal possible tax law noncompliance as specified in the guidelines. For example, if an information referral alleges unreported income, staff look for the “unreported income” checkbox or key words, such as paid by cash, off the books, or under the table. A referral may allege multiple types of tax noncompliance, and determining the appropriate IRS unit to receive the referral for further action can be subjective. Taxpayer identification research. After initial screening for the tax issues, Fresno ICT clerical staff must determine if the information referral includes a tax identification number (TIN) that identifies the alleged wrongdoer. If a TIN is not included, staff use the IRS integrated data retrieval system (IDRS) to locate the Social Security number for an individual or the employer identification number for a business. For alleged issues with Form 1040 individual tax returns, such as itemized deductions or refundable credits, Fresno ICT clerical staff use IDRS to determine whether the taxpayer is overseen by W&I or SB/SE. 
After doing the appropriate screening and research, the Fresno ICT employee who performed the work is required to enter his or her identification number on the top left-hand corner of the paper information referral. Routing. Upon completion of TIN research, Fresno ICT clerical staff use the guidelines to route information referrals to IRS divisions or other units or to retain them for destruction. After determining the appropriate routing, clerical staff manually sort the paper referrals into 31 boxes—each labeled with key words drawn from the guidelines and the name of the corresponding IRS unit. For example, the box for routing business tax issues to SB/SE audit lists key words for unreported business income, such as “company or corporation,” “rental property,” and “self-employed.” For alleged issues with Form 1040 individual tax returns, such as itemized deductions or refundable credits, Fresno ICT clerical staff sort the information referrals into separate boxes for W&I and SB/SE routing based on the IDRS tax identification research. On an intermittent basis (or when a box is full), Fresno ICT lead staff count the number of information referrals in a box, attach a Form 3210 Document Transmittal, and route the referrals by mail to the designated operating division or other IRS unit for further review. Fresno ICT routes information referrals to 26 different locations across about 15 other IRS operating divisions and units. SB/SE and W&I audit divisions are the most common recipients of information referrals from the Fresno ICT, as shown in figure 4. From fiscal years 2012 through 2015, SB/SE received over 170,000 information referrals and W&I received over 100,000—about two-thirds of all information referrals routed for further review. Retention. Referrals that cannot be routed as instructed by the guidelines are retained for destruction for 90 days. These may include information referrals that do not allege a federal tax issue or do not have key words specified in the guidelines. For example, the referral may allege that the taxpayer owes unpaid child support, has unpaid state taxes, or is improperly receiving benefits from another federal program. Those retained may also include referrals without the name, address, or TIN where the clerical staff cannot identify the alleged taxpayer. IRS officials told us that some referrals have only vague insinuations of wrongdoing and that a small number of individuals repeatedly submit multiple claims of this type. For example, a few individuals send in bulk referrals with similar allegations about businesses in certain industries or geographic areas. Since fiscal year 2012, the share of referrals sent for destruction has increased, with 25 percent of referrals retained for destruction in fiscal years 2014 and 2015. Once sent to retention, information referrals are boxed and sealed for storage for 90 days. These referrals are not subject to further screening or analysis prior to destruction. After information referrals are routed to the appropriate IRS audit unit, staff within that unit determine if there is a tax noncompliance issue and whether it is worth pursuing in light of available resources and priorities. The first step is classification, which is a process of determining whether a return should be selected for audit and what issues should be audited. The next steps are prioritization and actual return selection. For paper information referrals, these steps are generally a manual process.
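The screening and routing steps described above amount to a key-word matching rule that clerks apply by hand against the guidelines. The short Python sketch below is purely illustrative of that logic under stated assumptions; it is not IRS software (the actual process is manual and paper-based), and the key words, unit names, and function shown are hypothetical.

```python
# Illustrative sketch only: models the manual key-word screening and routing logic
# described above. Key words, unit names, and routing rules are assumptions for
# illustration, not IRS data or systems.

# Hypothetical guidelines: routing destination -> key words (loosely echoing examples
# cited in this report, such as "under the table" and "rental property").
ROUTING_GUIDELINES = {
    "SB/SE audit": ["company or corporation", "rental property", "self-employed"],
    "W&I audit": ["unreported income", "paid by cash", "off the books", "under the table"],
    "Criminal Investigation": ["organized crime", "narcotics income"],
}

def screen_referral(allegation_text: str, tin_found: bool) -> str:
    """Return a routing destination for one referral, or retain it for destruction."""
    text = allegation_text.lower()
    for unit, key_words in ROUTING_GUIDELINES.items():
        if any(key_word in text for key_word in key_words):
            # First match wins here, which mirrors why routing a referral that
            # alleges multiple types of noncompliance can be subjective.
            return unit
    # No guideline key words matched; a referral that also lacks identifying
    # information cannot be used and would be retained for destruction.
    return "manual review" if tin_found else "retain for destruction"

if __name__ == "__main__":
    sample = "Neighbor is paid by cash, off the books, for rental property repairs."
    print(screen_referral(sample, tin_found=True))  # prints "SB/SE audit"
```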
To classify information referrals, operating units and divisions (such as W&I and SB/SE) review the content of the information referral to determine whether the referral has audit potential—returns for which an audit is most likely to find errors and recommend changes to the reported tax. Factors to be considered at this stage include a clear and reliable issue, strong supporting documentation by the person submitting the information referral, attainment of certain IRS-defined criteria or tolerances (such as dollar thresholds), inclusion of a significant dollar amount in the allegation, and criminal or significant civil tax potential. If the referral does not have audit potential, it is held for the retention period and then destroyed. After classifying those information referrals with audit potential, IRS determines each referral’s priority. The prioritization process varies among IRS operating divisions. W&I and SB/SE both perform correspondence audits—in which a letter is sent to taxpayers asking them to provide information about an item on their tax return. SB/SE also performs field audits—which include face-to-face audits with taxpayers to review their books and records. To prioritize its audits, W&I uses a work plan to determine how many Earned Income Tax Credit (EITC) information referrals to audit each month for its correspondence audits. According to W&I officials, the planned number of referral audits by month is an estimate, and sometimes other non-referral cases may receive higher priority over referrals for audit. If W&I performs fewer referral-related audits than estimated in one month, it will plan to perform more in the following month. SB/SE prioritizes returns for examination based on the merit of the tax issue, not on the source of the information. SB/SE also uses a work plan for its correspondence audits, but does not have targets for numbers of information referral cases to audit. According to SB/SE officials, whether a referral is included in a correspondence audit is dependent on the current inventory level. In contrast, SB/SE field audit uses a project code list to prioritize audits by project issues, such as tax return preparers, offshore transactions, abusive transactions, fraud, and information referrals. According to SB/SE officials, other projects may be a higher priority than information referrals in general, and a referral that involves higher priority issues would be considered higher priority as well. For example, an information referral involving offshore transactions would receive a higher priority than a routine information referral. According to CI officials, CI conducts a general review of all referrals (regardless of the source) in order to evaluate the facts and allegations and to determine if there is a potential criminal lead that warrants an investigation. CI priorities are those referrals that involve high-risk tax noncompliance or financial crimes. Taxpayers are selected for audit based on the amount of audit potential, how they fit into the audit plan, the resources available, and the size of the inventory. According to IRS officials, there are many reasons why information referrals may not be selected for audit. For example, if a referral contains erroneous information (such as a tax return that was incorrectly claimed to be unfiled) it may not be selected. In addition, a referral may not lead to a new audit case if IRS has already identified that return through another selection mechanism. 
In fiscal years 2012 through 2015, about 4.6 percent of information referrals routed to SB/SE and W&I led to audits. Specifically, about 5.4 percent of information referrals routed to SB/SE and about 3.4 percent routed to W&I led to audits over that period. As shown in tables 2 and 3, approximately 13,000 audits selected based on information referrals resulted in over $209 million in additional tax assessments recommended. Internal control standards can serve as tools to help IRS management ensure that the information referral process contributes to IRS’s mission of treating all taxpayers equitably and with integrity. However, when we compared IRS’s process to these standards, we found that some controls were deficient in their design and implementation. Specifically, we found limitations in oversight structure, documentation of procedures, and monitoring of results. These control deficiencies increase the risk that handling of information referrals could fall short of the IRS mission, resulting in inconsistent and inequitable treatment of noncompliance leads submitted by the public. According to federal internal control standards, an agency’s organizational structure provides management’s framework for planning, directing, and controlling operations to achieve agency objectives. A good internal control environment requires that the agency’s organizational structure clearly defines key areas of authority and responsibility and establishes appropriate lines of reporting. IRS does not have an organizational structure for the information referral process with clear leadership responsible for defining objectives in measurable terms to ensure that the objectives of the information referral process align with IRS’s mission of fair and equitable application of the tax laws and addressing the tax gap. For example, within the current structure, Accounts Management and the Fresno ICT have separate responsibilities focused on routing information referrals to other IRS units, but Accounts Management and the Fresno ICT are not responsible for determining the outcomes of the information referrals. Also, the Tiger Team is not tasked with tracking how many referrals undergo follow-up and results achieved, such as detecting noncompliant taxpayers and potential revenue that would otherwise go uncollected. In an effort to determine measurable outcomes for the information referral process, in 2013 TIGTA recommended that W&I assess the value of the information referral process to reassess the emphasis placed on that process, and prioritize it as needed. IRS had agreed to assess the value of the process after implementing other changes TIGTA recommended in a prior report, which included revising the form and its instructions and developing guidelines for screening and routing referrals. As of December 2015, IRS had not taken action on this recommendation. In November 2015, W&I leadership explained that it would not be cost-effective for Accounts Management and Fresno ICT to track the information referrals and determine outcomes achieved by the various divisions and other units that receive the forms when ICT is only processing the forms. According to IRS officials, Fresno ICT clerical staff spent over 4,900 hours on information referral screening in fiscal year 2015, but those hours do not reflect Submission Processing and Accounts Management time for the coordination activities and maintaining the guidelines.
Although IRS has not assessed the value of the information referral process, according to IRS’s Audit Information Management System data, audits based on information referrals resulted in at least $62 million in recommended tax assessments for fiscal year 2014 (the most recent complete year available). W&I and SB/SE received nearly two-thirds of information referrals routed out of Fresno but W&I and SB/SE do not track their costs for screening and classifying paper information referrals for audit consideration. The IRS realignment of compliance operations has led to additional processing steps within the W&I and SB/SE units that process information referrals. In November 2014, IRS realigned compliance operations across its W&I and SB/SE operating divisions. As a result of this realignment, all EITC and pre-refund compliance programs are now carried out by W&I, and all other discretionary programs moved to SB/SE. At the end of fiscal year 2015, the guidelines still directed Fresno ICT to send information referrals involving individual taxpayers without business or self-employed income to W&I and all referrals for individual taxpayers with business income to SB/SE. According to IRS audit officials, W&I and SB/SE are taking an additional step to physically exchange information referrals routed from Fresno between their correspondence audit units in Andover, Massachusetts. As of December 2015, W&I was routing information referrals other than those for the EITC to SB/SE, and SB/SE was routing all EITC referrals to W&I. According to IRS officials, W&I and SB/SE have agreed to conduct a test to track receipts of EITC and non-EITC information referrals and to discuss the results in early 2016 to determine the next steps for the routing of the information referrals. The fragmented structure of oversight and management of the information referral screening process coupled with dispersed responsibility for follow-up activities throughout IRS also complicates determining how much IRS spends in total on screening paper information referrals and mailing the paper forms back and forth between IRS locations. Without clear leadership and responsibility for defining program objectives and measuring outcomes resulting from information referrals about tax noncompliance by individuals and businesses, IRS does not know the costs of the information referral process or how effectively that process is contributing to the agency’s mission and addressing the tax gap. Federal internal control standards call for all transactions and other significant events to be clearly documented and available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. IRS requires primary sources of guidance and procedures with an IRS-wide or organizational impact to be included in the IRM. This requirement is intended to ensure that IRS employees have the approved policy and guidance they need to carry out their responsibilities in administering the tax laws. However, we found gaps in documentation of procedures for ICT clerical screening and routing of information referrals as well as for updating and distribution of screening and routing guidelines. IRS has incomplete documentation of procedures for screening and routing information referrals to other IRS units for further action. 
Since 2012, IRS has developed written screening and routing guidelines documenting which tax issues to mail to specific points of contact in other IRS units. Accounts Management distributes these guidelines via the Servicewide Electronic Research Program, which provides employees with access to current IRMs, interim procedural guidance, and reference materials. In November 2015, Submission Processing issued IRM procedures for ICT clerical operations effective January 1, 2016. However, the revised IRM does not have procedures for the Fresno ICT clerical screening and routing of information referrals. Specifically, the Fresno ICT does not have documented procedures to ensure that labels used by the clerical staff in sorting screened referrals into the various boxes for routing are consistent with the guidelines developed by the Tiger Team. The Fresno ICT staff developed the box labels as an onsite job aid, and the Fresno ICT managers incorporated the box numbering job aid in training for the 2015 filing season temporary clerical staff. However, discrepancies between the key words on the box labels and the guidelines have resulted in misrouting. For example, we observed that the labels directed clerks to route referrals alleging unreported retirement and tax-exempt bond income to the TE/GE division. However, the guidelines direct routing individual income misreporting allegations to the W&I and SB/SE divisions. TE/GE officials confirmed that the ICT incorrectly routes referrals about individuals misreporting income to their unit. They also said that the number of misroutes had decreased as misroutes were discussed at Tiger Team meetings and Fresno ICT clerks received additional training on screening referrals. The revised IRM lists the Form 3949-A screening and routing guidelines as a document but does not have procedures for the Fresno ICT regarding (1) applying the guidelines, (2) inventory management reporting, (3) monitoring referrals retained for destruction, and (4) feedback on the misrouted referrals. In contrast, IRM procedures for the correspondence scanning performed by ICT clerical staff detail steps on removing staples from paper to be scanned and the daily tasks for cleaning the scanners. The lack of documented procedures clearly linking the guidelines to the physical logistics of clerical routing of paper forms increases the risk that the guidelines will not be implemented consistently. We also found deficiencies in maintaining and communicating the guidelines to the clerical staff. According to an Accounts Management official, in the fall of 2014, the Accounts Management unit inadvertently distributed an outdated copy of the guidelines that did not incorporate clarifications added to aid the Fresno ICT staff with clerical screening of information referrals involving child identity theft (discussed further below). According to Accounts Management officials, they are working to ensure that updated guidelines are communicated on a timely basis to the clerical staff. At the June 2015 coordination meeting, the Fresno ICT managers said that they were not aware that the Accounts Management unit had distributed updated guidelines on the Servicewide Electronic Research Program. As a result, the Fresno ICT managers did not distribute copies of the latest guidance for the clerical staff to follow.
According to IRS officials, in September 2015 the Accounts Management unit began to electronically distribute updated guidelines each month and to alert the Fresno ICT managers by email so they can print paper guidelines for the clerical staff. However, we noted that the revised IRM for ICT operations does not have procedures for this communication practice. In December 2015, when we discussed this with IRS officials, they agreed that the IRM should document the guidelines and procedures for the Fresno ICT clerical staff for screening and routing information referrals. In addition, inadequate controls over maintaining and communicating the routing guidelines coupled with incomplete procedures linking the routing guidelines to the routing boxes have contributed to clerical confusion and errors that resulted in IRS erroneously destroying information referrals from taxpayers reporting child identity theft. In March 2015, the ICT clerical staff we observed in Fresno said that they were uncertain how to route such referrals. The clerical staff we interviewed said that the label on the identity theft routing box originally specified child identity theft as key words for screening and routing, but those key words were omitted at some point in updating the box label. Although the routing guidelines for screening and routing identity theft referrals approved by the Tiger Team had not changed, some clerical staff stopped routing child identity theft referrals and instead retained those for destruction. We were not able to quantify how many information referrals reporting child identity theft may have been erroneously destroyed between the fall of 2014 and March 2015. During our audit visit, we reviewed the most recent box of 155 information referrals retained for destruction and identified 12 information referrals reporting potential child identity theft or misuse of a child’s tax identification number where the parent or custodian did not know who was using the child’s identity. During a meeting with Fresno ICT managers regarding the 12 information referrals, the ICT managers said that they determined that 4 of the 12 referrals were correctly retained for destruction because the individual submitting the referral did not specify another tax issue and did not know who used the child’s identity. Seven of the 12 referrals should not have been retained for destruction and were subsequently routed by the Fresno ICT as stolen refund cases, and one of the 12 referrals alleging identity theft by a tax return preparer was routed to the Return Preparer referral program. During the June 2015 coordination meeting that we observed, Accounts Management officials orally directed the Fresno ICT managers to instruct the clerical staff to route referrals about child identity theft. The Fresno ICT managers instead requested that Accounts Management and the Tiger Team clarify the guidelines for child identity theft reports to avoid further confusion. In August 2015, the clarification was added to the routing guidelines. Inadequate controls over the guidelines without clearly documented clerical review procedures pose the risk that clerical staff may apply outdated or inaccurate routing guidelines. Misrouting the information referrals causes delays and added cost for IRS in getting referrals to the appropriate unit for follow-up. Information referrals inappropriately retained for destruction may compromise IRS’s ability to combat tax noncompliance reported by the public and to assist identity theft victims.
According to federal internal control standards, effective management of an organization’s workforce—its human capital—is essential to achieving results. Qualified and continuous supervision should be provided to ensure that internal control objectives are met. Internal control standards also require key duties and responsibilities to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling related assets. No one individual should control all key aspects of a transaction or event. According to Fresno ICT management, due to limited experienced staff for the information referral process, management relies on one lead clerk to keep track of the information referral inventory. That Fresno ICT lead clerk is responsible for several key inventory reporting duties, which include documenting the number of referrals received, counting and mailing those routed to other IRS units, and compiling the weekly inventory reports. The weekly inventory reports compiled by the Fresno ICT are used by Submission Processing, Accounts Management, and other IRS units participating in the Tiger Team to track the volume of information referrals received, referrals waiting in inventory, and volumes routed to other units. Inventory information is used by the Fresno ICT management in assigning clerical staff. The Fresno ICT has not trained additional lead clerks to compile the inventory reports. Given the fragmented organizational structure and shared use of the weekly report, it is unclear whether the Accounts Management unit or the Fresno ICT is responsible for documenting procedures on how to prepare the inventory reports. In addition, IRS officials explained that due to time constraints and other priorities in the Fresno ICT, information referral inventory reports are not reviewed by a supervisor before relaying the weekly report to Accounts Management. During our visit, we tested several weekly reports against the transmittal forms that document the number of information referrals that are routed to other units. Developing the reports involves several calculations to document the number of information referrals received and routed to each of the other operating units. We identified errors in tallying the counts of referrals retained, recording the number of referrals sent to each IRS unit, and calculating the total number routed. The lack of supervisory review and segregation of duties in preparing information referral inventory reports can lead to errors in developing these reports, which are used by other IRS operating units. According to federal internal control standards, ongoing monitoring should occur in the course of normal operations. It is performed continually and is ingrained in the agency’s operations. It includes regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. We found that IRS does not have documented procedures for supervisory review of the screening of referrals retained for destruction, although one-quarter of information referrals received in fiscal years 2014 and 2015 were destroyed rather than routed for follow-up. The ICT quality review staff are to sample all ICT work streams, but we could not determine which, if any, retained referrals had been reviewed under that process.
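The weekly inventory reports described above involve exactly the kind of simple count reconciliation that a supervisory review could verify before the report is relayed to Accounts Management. The fragment below is a hypothetical sketch of such a check; the figures and field names are illustrative assumptions, not IRS data or systems.

```python
# Hypothetical reconciliation check for a weekly inventory report (illustrative only;
# all figures and field names are assumptions, not IRS data). Referrals routed to each
# unit plus those retained for destruction should equal the total screened that week.
weekly_report = {
    "screened_total": 1200,
    "routed_by_unit": {"SB/SE": 520, "W&I": 330, "CI": 40, "TE/GE": 10},
    "retained_for_destruction": 300,
}

routed_sum = sum(weekly_report["routed_by_unit"].values())
accounted_for = routed_sum + weekly_report["retained_for_destruction"]
difference = weekly_report["screened_total"] - accounted_for

if difference != 0:
    # A nonzero difference signals a tallying or recording error for supervisory review.
    print(f"Discrepancy: {difference} referrals unaccounted for")
else:
    print("Weekly counts reconcile")
```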
According to the Fresno ICT managers and lead clerical staff, the lead clerk is to conduct a limited visual review of the referrals retained for destruction before sealing and dating each storage box, but we found no documentation of such reviews. Also, IRS does not track common reasons why referrals are not routed for follow-up. Information referrals inappropriately retained for destruction may compromise IRS’s ability to combat tax noncompliance reported by the public and to assist identity theft victims. According to the Fresno ICT manager, prior to our March 2015 site visit, the Fresno ICT conducted an ad hoc quality review of referrals retained for destruction and had trained all clerical staff on the routing guidelines. According to the Fresno ICT manager, the clerical staff who do not screen information referrals regularly—such as temporary seasonal staff during the tax filing season—are less familiar with the guidelines and are more likely to incorrectly retain referrals for destruction. In response to the ad hoc review by Fresno ICT, the ICT removed and routed for follow-up several hundred referrals that otherwise would have been destroyed. During our March 2015 visit to the IRS ICT office in Fresno, California, we reviewed a nongeneralizable sample of 38 information referrals (screened during early January 2015 through mid-March 2015) retained for destruction. Even though IRS had conducted an ad hoc quality review of referrals retained for destruction prior to our visit, we questioned IRS officials about 10 of the 38 retained referrals we reviewed. These 10 referrals lacked documentation of clerical screening and tax identification number research or did not appear to follow the routing guidelines. IRS managers subsequently determined 4 of the 10 should not have been retained for destruction. Three referrals were routed to other IRS units after clerical staff completed TIN research and re-screened them. One Spanish-language referral had not been screened and was sorted for screening by a clerical staff person knowledgeable in Spanish. The ad hoc review conducted by the Fresno ICT in March 2015 saved hundreds of referrals from destruction, but without periodic monitoring of the reasons for referrals being retained, IRS is missing an opportunity to identify patterns in retention errors. Analysis of referrals retained before destruction could help identify clerical staff errors that may be addressed by better documenting procedures. For example, the screening and routing guidelines do not have a procedure for referrals in languages other than English. The Fresno ICT has set up two boxes labeled for Spanish and other languages, respectively. However, clerical staff may not know to sort other language referrals in those boxes, and lead clerks may not regularly check those boxes because they are not part of the weekly routing mail and inventory report. Without procedures for reviewing information referrals retained for destruction, some referrals may be inappropriately retained, which may compromise IRS’s ability to combat tax noncompliance reported by the public. Federal internal control standards call for all transactions and other significant events to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained.
Officials from the Accounts Management unit and the Fresno ICT told us that they rely on misrouted information referrals that are returned by other IRS units as feedback about the quality of the screening and routing process. The weekly inventory management and routing report documents the number of misroutes returned by various IRS units. The Fresno ICT uses the misrouted referrals to provide feedback to clerical staff about specific errors and then re-routes the paper referrals by mail to the appropriate unit for follow-up. IRS units that return misroutes to the Fresno ICT use an IRS transmittal form to document the return of misroutes to the ICT. The transmittal form also reports the number of referrals incorrectly sent to the IRS unit. This feedback process adds costs of mailing misroutes back and delays in re-routing referrals for follow-up. For fiscal year 2016 as of December 2015, W&I returned 625 misrouted information referrals by mail to Fresno. According to Accounts Management and Fresno ICT officials, misrouted referrals involving business income issues are to be re-routed by mail to SB/SE. Although Accounts Management and the Fresno ICT rely on misrouted information referrals as feedback about the quality of the routing process, the IRM does not contain procedures on handling information referral misroutes. The total number of misroutes by the Fresno ICT is unknown as not all misroutes are properly identified and recorded. Information referral inventory reporting showed nearly 1,400 referrals (approximately 2 percent) were initially misrouted in fiscal year 2015. The Accounts Management unit and Fresno ICT officials said these data reflect when operating units return misrouted referrals back to the ICT. However, some units have not returned information referrals misrouted by the Fresno ICT and instead forwarded misroutes directly to other units rather than mailing misroutes back to the ICT for re-routing. According to SB/SE officials, the SB/SE division previously forwarded misrouted information referrals to other units but now returns misroutes to the Fresno ICT. TE/GE officials told us that they forward misrouted information referrals to other units rather than mailing misroutes back to the ICT for re-routing. The lack of documented guidance on handling information referral misroutes poses the risk that IRS may be missing opportunities to identify the number and types of misroute errors and analyze ways to reduce misrouting. We found that fragmentation and overlap characterize the IRS system used for public reporting of tax noncompliance and potential fraud. Fragmentation can sometimes result in duplication of efforts, and inefficient use of resources. Multiple and uncoordinated forms and instructions can confuse the public trying to submit information to IRS. Such conditions can also create rework for IRS in routing information for specialized review. Gaps and delays in routing and redirecting information among referral programs could hamper IRS’s pursuit of some tax noncompliance and potentially leave taxpayers vulnerable to issues such as identity theft or abusive transactions. As we have previously reported, fragmentation and overlap can have potentially positive effects, such as when programs work together to provide services or when the overlap is planned so that the public is receiving services in a coordinated manner. In addition to the general information referral process, we identified eight other specialized referral programs, as shown in table 4. 
Several of the referral programs have their own forms and their own mechanisms for intake and screening. Fragmentation and overlap across a mix of IRS external referral programs and processes create duplication of effort, contribute to inefficient use of resources, and may be confusing to individuals submitting referrals. We cannot quantify the extent of duplication when a person submits the same information to more than one program or submits duplicate information referrals because IRS has no way to track information across the multiple referral programs. However we did identify some issues: The public submits referral information on the wrong form or to the wrong office. Although IRS revised the information referral instructions in March 2014 to help clarify how to submit specialized forms directly to other referral programs (as shown in table 4), Fresno ICT continues to erroneously receive information referrals that must be routed to those referral programs and the Whistleblower Office. For example, Fresno ICT routed more than 2,900 information referrals related to identity theft and return preparer misconduct in fiscal year 2015. Also, some individuals mistakenly mail information referrals to the SB/SE Abusive Transactions referral program instead of the Fresno ICT address specified on the information referral form. Some individuals submit multiple forms for the same allegation. IRS officials stated that several programs could receive the same referral for processing. For example, some whistleblowers submit both the information referral Form 3949-A and the whistleblower claim Form 211, either together as a package or separately to both Fresno ICT and the Whistleblower Office. Routing between referral programs results in delays and added costs for re-screening. For specialized referrals submitted as information referrals, the Fresno ICT first screens and routes the referrals to other IRS referral programs where the referrals again undergo intake and screening. Similarly, staff from the SB/SE Abusive Transactions program must screen and mail the information referrals to the Fresno ICT for processing. Form 3949-A referrals submitted directly to the E-file program—discussed further below—are first screened in IRS scheme detection centers and then mailed to Fresno ICT for information referral processing. According to federal internal control standards, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have significant impact on achieving goals. Effective information technology management is critical to achieving useful, reliable, and continuous recording and communication of information. IRS does not have a mechanism to facilitate information sharing across the multiple referral programs used for handling tax noncompliance and other issues reported by the public. While Accounts Management uses the Tiger Team to enable communication and coordination with other IRS units that receive information referrals, IRS does not consistently draw on this vehicle in coordinating information referral activity. For example, we found that officials from one referral program (tax exempt organizations) attend the Tiger Team meeting. Officials from other referral programs were not aware of the Tiger Team, met separately with Accounts Management officials, or did not attend the Tiger Team meeting. 
According to an Accounts Management official, the Tiger Team was established to address misrouted Form 3949-A information referrals and to respond to inquiries from IRS units that receive most of the information referrals—specifically, SB/SE and W&I. The SB/SE Abusive Tax Transactions program does not receive information referrals routed from Fresno and thus does not have staff participate in Tiger Team discussions on information referral routing guidelines. However, the Abusive Tax Transactions program does receive information referrals that are mistakenly mailed by the public to the Abusive Tax Transactions mailing address. According to an official from the Abusive Tax Schemes programs, they must then screen and forward the erroneously submitted information referrals to the IRS Fresno office for appropriate screening and routing. This referral program had not reached out through the Tiger Team or to Accounts Management to determine and resolve why the public is mailing information referrals to their program. The E-file program does not participate in the Tiger Team or in discussions on developing the information referral routing guidelines. However, the E-file program began using Form 3949-A for its own referral program for reporting fraudulent or abusive tax returns. Accounts Management officials were not previously aware of this use of the information referral form until we brought it to their attention. According to an Accounts Management official, this gap in coordination on the use of the form could result in some e-file related information referrals mailed to Fresno being destroyed. In the past, our work has found that mechanisms or strategies to coordinate programs that address crosscutting issues may reduce potentially fragmented, overlapping, and duplicative efforts. Some of the specialized referral programs with overlapping responsibilities already have some formal means of coordinating on crosscutting issues and sharing information on related referrals. For example, the Return Preparer Office and SB/SE Abusive Transactions regularly coordinate on abusive transactions involving tax preparers and can access common electronic information systems to identify overlapping referrals. The Return Preparer Office also shares information with the Identity Theft Program on identity theft, a crosscutting issue. For example, identity theft can involve another IRS unit and a different referral form, such as the Identity Theft Program and the Return Preparer Office based on the type of identity theft. If the taxpayer is alleging that they are an actual or potential victim of identity theft, the specific Identity Theft referral form should be used to report the allegation. If the taxpayer is alleging a return preparer filed a return or altered their return without their consent, the Return Preparer Office referral form should be used to report the allegation. Other referral programs using specialized forms also have practices that could improve information referral processing, as shown in table 5. Improving the referral intake process through improved collaboration and coordination could benefit both IRS and the public. Specifically, harmonizing referral forms and instructions to avoid duplicate and misdirected filings may improve efficiency and help to reduce public confusion and administrative burden. For example, an IRS mechanism for coordinating referrals could explore electronic fax (e-fax) as a method to improve efficiency of referral intake. 
In fiscal year 2015, IRS received over 87,000 information referrals submitted on Form 3949-A or as letters. As previously mentioned, IRS only accepts paper information referrals, which must be mailed to the Fresno office, where the Image Control Team manually sorts and routes the form to other IRS units for further review. In contrast, five referral programs with forms—Identity Theft, SB/SE Abusive Transactions, Return Preparer Office, TE/GE Exempt Organizations, and E-file program—allow the public to submit referrals by fax. In 2013, IRS briefly explored and rejected the e-fax option for information referrals because the Image Control Team screening clerks at that time did not have computers to access fax submissions. Since December 2014, Fresno Image Control Team clerks have had access to computers to perform TIN research for information referrals; however, IRS has not revisited the e-fax option. Although IRS officials stated they do not believe e-fax for information referrals is feasible due to the large volume received, other IRS units collaborating across referral programs could provide lessons learned or suggestions for streamlining the intake process through e-fax or other options. Without a broader mechanism to communicate across its referral programs and collaborate on practices for receiving and screening referrals, IRS may be missing opportunities to leverage resources, streamline intake processes, and address challenges arising from the fragmented and overlapping referral programs. Resolving the inefficiencies of IRS's paper-based information referral system poses a unique challenge for management, given resource constraints, the complexity of current processes, and the need to protect taxpayer information. We have previously identified a number of management approaches that may enable IRS to consolidate the referral intake and screening process, including process improvement methods and technology improvements that increase efficiency, improve product quality, and decrease costs. Process improvement methods can involve examining processes and systems to identify and correct costly errors, bottlenecks, or duplicative processes while maintaining or improving the quality of outputs. Providing information to policymakers on how to improve efficiency and reduce and better manage fragmentation, overlap, or duplication can help alleviate some of the government's fiscal pressures and improve program effectiveness. Another component of improved efficiency involves identifying, developing, and implementing strategies that streamline the reporting of tax noncompliance while remaining convenient for the public. According to IRS's Strategic Plan for fiscal years 2014 through 2017, the public has a preference for Internet-based service over other service channels such as phone, paper, or in-person service. IRS says it is committed to expanding its portfolio of digital service offerings to meet customer expectations while continuing to keep taxpayer data secure. State tax agencies and other federal agencies accept fraud referrals through online submission. For example, five of the seven most populous states provided an online option for reporting allegations of tax fraud and evasion. Similarly, the Social Security Administration has an online form for reporting fraud via its website.
Finally, strengthened collaboration across the nine referral programs could enable IRS to explore a consolidated online referral submission process. Such an effort could help the agency receive and process information referrals more efficiently, while also reducing the public confusion caused by trying to choose among multiple forms. Currently, IRS's specialized referral forms to report alleged tax noncompliance are received within IRS units through different channels (mail, fax, email). As discussed earlier, the public often uses the information referral because it is a general form for reporting different types of tax noncompliance; however, we found that information referrals are misrouted and often retained for destruction. According to a Return Preparer Office official, the Return Preparer Office is exploring conversion of its specialized referral form to an online form. However, if the various referral programs separately explore developing online form submission, IRS risks replicating or compounding the fragmented mix with multiple referral forms and means of submission. An IRS official stated that a universal online referral intake system to control the routing of referrals would be preferable to separate systems for each referral form. Streamlining referral submission could be less cumbersome for the public and could reduce delays and rework in re-routing information to specialized referral programs. According to IRS officials, expanded electronic submission is a vision for the future, but funding is limited, and officials stated that committing resources for referral capacity is in the queue behind direct taxpayer account services. An IRS plan and timeline for developing a consolidated, online referral submission could assist IRS in leveraging specialized expertise to further consolidate the referral intake process. Information referrals are a key mechanism for the public to report potential tax noncompliance and aid IRS in addressing the tax gap. Audits of individuals and businesses based on information referrals resulted in at least $62 million in recommended assessments in fiscal year 2014. However, IRS oversight and management of its information referral screening process are fragmented across multiple IRS units, with the actual handling of referral follow-up further dispersed across operating divisions, multiple referral programs, and other IRS units. Although IRS has guidelines for screening and routing the information referrals, it does not have an organizational structure for the Form 3949-A information referral process that identifies responsibility for defining program objectives and an appropriate line of reporting for measuring results. Without such a structure, IRS cannot ensure accountability in the referral process or determine how effectively it is using resources in this area. IRS has not consistently documented and implemented procedures for the information referral process. Procedures are not clearly documented for the screening and routing guidelines, and changes are not consistently communicated to relevant staff, resulting in referrals being misrouted or inadvertently destroyed and errors in inventory management. In addition, IRS does not have procedures for monitoring information referrals retained for destruction or those that are incorrectly routed. Without adequate internal controls, IRS cannot effectively manage the information referral process.
Documenting and implementing procedures for the information referral process would help IRS ensure that the process is implemented consistently. IRS has a fragmented and overlapping system for the public to report tax noncompliance, with several units having their own forms and mechanisms for intake and screening. Multiple referral forms and instructions may contribute to inefficient use of IRS resources. In addition, IRS does not have a mechanism for coordinating referral issues across the multiple programs used for handling tax noncompliance. Without a coordination mechanism, IRS may be missing opportunities to leverage resources and address challenges from the multiple referral programs. Choosing to stay with the paper information referral means that manual screening of the waiting inventory must compete for IRS staffing resources with the scanning and relaying of time-sensitive taxpayer correspondence. Strengthened collaboration across the referral programs could enable IRS to explore an online referral submission process, which could help the agency improve its processing of information referrals. Without a mechanism to coordinate on a plan and timeline for developing a consolidated, online referral submission, IRS cannot receive referrals efficiently or meet its strategic goal of expanding its portfolio of digital service offerings to the public. We recommend the Commissioner of Internal Revenue take the following seven actions: Establish, document, and implement an organizational structure identifying responsibility for defining objectives with an appropriate line of reporting for measuring costs and results for information referrals. Ensure that the IRM has internal controls for processing information referrals, including: establishing, documenting, and implementing procedures for maintaining and communicating the information referral screening and routing guidelines to ICT and IRS units receiving information referrals as well as procedures for ICT screening and routing operations; establishing, documenting, and implementing supervisory review and segregation of duties for inventory management reporting procedures; establishing, documenting, and implementing ongoing monitoring of information referrals retained for destruction, including a mechanism for tracking the reasons referrals were retained prior to destruction; and establishing, documenting, and implementing procedures for each IRS operating unit receiving information referrals to provide feedback on the number and types of referrals misrouted and on their disposition, and a mechanism to analyze patterns of misroute errors. Establish a coordination mechanism to facilitate communication and information sharing across IRS referral programs on crosscutting tax issues and ways to improve efficiency in the mechanisms for public reporting of possible tax violations. Direct the referral programs to establish a mechanism to coordinate on a plan and timeline for developing a consolidated, online referral submission in order to better position IRS to leverage specialized expertise while exploring options to further consolidate the initial screening operations. We provided a draft of this product to the Commissioner of Internal Revenue for comment. The IRS Deputy Commissioner for Services and Enforcement provided written comments dated February 8, 2016, which are summarized below and reprinted in appendix III.
In an email received February 9, 2016, IRS indicated through the Office of Audit Coordination that it generally agreed with our recommendations. In its letter, IRS stated that our report identified several opportunities for improving the information referral process, and in response, IRS set up a new cross-functional working group to develop a streamlined, coordinated, and efficient process with appropriate internal controls. IRS also plans to explore the feasibility of a single referral form and consider offering a secure online option for the public to submit referrals to IRS. IRS stated that it is identifying the specific actions, responsible officials, and implementation timelines to address our recommendations. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at lucasjudyj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. You asked us to assess the overall effectiveness of the Internal Revenue Service's (IRS) information referral process. This report (1) describes IRS's process for screening and routing Form 3949-A information referrals and for prioritizing information referrals within the IRS audit workload; (2) assesses the controls for the information referral screening and routing process; and (3) evaluates the coordination between the information referral process, the Whistleblower Office, and other IRS referral programs. For the first objective, we reviewed IRS documents, to the extent that they were available, describing the screening, routing, classifying, prioritizing, and selection of information referrals for audit. The documents included the Internal Revenue Manual (IRM), Form 3949-A Screening and Routing Guidelines, operating division referral processing flowcharts, organizational charts, and training materials. We interviewed IRS officials responsible for maintaining the screening and routing guidelines and those overseeing the clerical screening and routing process, and in March 2015 we observed the information referral screening process in Fresno, California. We reviewed coordination meeting agendas and minutes and observed coordination meetings between the officials responsible for the screening process and other IRS unit officials. In addition, we analyzed inventory data on the volumes of referrals received and routed to operating divisions for fiscal years 2012 through 2015. Based on testing of the data and review of documentation and interviews, we identified several weaknesses in the information referral inventory reporting, including minor miscalculations of referral routing totals and a lack of supervisory review, but determined that the data were reliable for the purposes of this report. We also interviewed IRS audit officials in the Wage and Investment (W&I) and Small Business/Self-Employed (SB/SE) operating divisions because they receive about two-thirds of the referrals sent for screening, classifying, prioritizing, and audit selection.
We also interviewed IRS officials from the Criminal Investigation division, which receives referrals involving possible large-dollar or broader schemes. We analyzed data from the Audit Information Management System for W&I and SB/SE referral audits closed and the recommended tax assessments for fiscal years 2012 through 2015 (as of August 2015, the latest data available). We compared the results of our analyses of data to the tabulations provided by W&I and SB/SE to assess the consistency of the results. Based on our testing of the data and review of documentation and interviews, we determined that these data were reliable for the purposes of this report. For the second objective, we reviewed existing internal controls for the information referral screening process and assessed whether the procedures aligned with relevant Standards for Internal Control in the Federal Government. To assess how IRS implements its procedures and controls, we used IRS's Form 3949-A screening and routing guidelines and other procedure documents as criteria. We reviewed inventory data, including data on misrouted referrals returned by other IRS units for fiscal years 2012 through 2015, and we interviewed IRS officials on misrouted referrals. Based on our review of the data and interviews, we determined that the misroute data were not reliable for the purposes of this report because the total number of misroutes is unknown, as not all misrouted referrals are properly identified and recorded. We also reviewed a nongeneralizable sample of 38 referrals retained for destruction to check whether the documentation followed the procedures. We selected a systematic sample of these referrals from among 5,935 referrals retained for destruction from January through March 2015; we selected about every 150th referral among those boxed for destruction (a simplified sketch of this interval-based selection appears below). To follow up on information obtained during the site visit regarding ICT clerical staff screening of referrals alleging child identity theft, we identified 12 referrals that were related to child identity theft out of 155 referrals that were retained for destruction in March 2015. The 12 referrals were reports by a parent or custodian of theft of a child's identification number where the parent or custodian did not know who was using the child's identity. We did not include cases that involved custodial issues about eligibility to claim a dependent, which are not considered identity theft. Finally, we interviewed IRS officials about the processes and controls for the routing guidelines and screening process and discussed any potential deficiencies we identified. For the third objective, we interviewed IRS officials to determine the extent of coordination between the information referral process and other IRS external referral programs. Specifically, we reviewed referral programs with forms for reporting identity theft, fraud by tax return preparers, abusive tax promotions, and misconduct by tax-exempt organizations. We drew on information and analysis from our October 2015 report on the IRS Whistleblower Office. We analyzed and compared the Form 3949-A; Form 211, Application for Award for Original Information (the Whistleblower Office form); and five other public referral forms to identify common information items as well as information specific to the various referral programs. We reviewed the IRS web page on reporting tax noncompliance.
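The interval-based selection described above can be illustrated with a short sketch. The code below is only a simplified illustration of systematic sampling under assumed inputs; the list of referral identifiers, the fixed interval of 150, and the function name are hypothetical stand-ins and do not represent IRS data or the program GAO actually used.

```python
import random

def systematic_sample(items, interval, seed=None):
    """Return roughly every `interval`-th item, starting from a random offset."""
    rng = random.Random(seed)
    start = rng.randrange(interval)   # random starting point within the first interval
    return items[start::interval]     # then every interval-th item thereafter

if __name__ == "__main__":
    # Hypothetical stand-in for the 5,935 referrals retained for destruction.
    inventory = ["referral_%04d" % i for i in range(1, 5936)]
    sample = systematic_sample(inventory, interval=150, seed=1)
    print(len(sample))   # about 39-40 items; the review described above covered 38 referrals
```

A fixed interval with a random start is a common way to draw a nongeneralizable but evenly spread selection from a physically ordered inventory, which is consistent with selecting referrals already boxed for destruction.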
We also reviewed other IRS web pages that identified other referral programs handling issues for the E-file program, employee plans, and tax shelters. We reviewed the IRM and other IRS guidance and interviewed IRS officials for referral programs with forms to determine how the other referral programs receive and screen their own referrals and how they process information referrals. We reviewed the Standards for Internal Control in the Federal Government and our prior reports on interagency collaboration, which discuss key practices and considerations for implementing collaborative mechanisms. We also drew on our April 2015 guide on fragmentation, overlap, and duplication. We interviewed IRS officials about options to coordinate or consolidate referral form intake in order to address areas of potential fragmentation, overlap, and duplication. We conducted this performance audit from December 2014 to February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, MaryLynn Sergent (Assistant Director), LaSonya Roberts (Analyst in Charge), Jehan A. Chase, Deirdre Duffy, Steven Flint, George Guttman, Laurie C. King, James R. McTigue, Donna L. Miller, and Cynthia M. Saunders made key contributions to this report.
Reports by the public of suspected underreporting of taxes or other tax violations can help IRS detect millions of dollars in taxes that would otherwise go uncollected. Productive referrals can help address the net $385 billion tax gap—the difference between the amount of taxes paid voluntarily on time and the amount owed. IRS received about 87,000 information referrals in fiscal year 2015. GAO was asked to assess the overall effectiveness of the information referral process. This report (1) describes IRS's process for screening and routing information referrals; (2) assesses the controls for the information referral screening and routing process; and (3) evaluates the coordination between the information referral process, the Whistleblower Office, and other IRS referral programs. GAO reviewed IRS guidance, processes, and controls for the information referral process, assessed whether IRS's processes followed Standards for Internal Control in the Federal Government, and interviewed IRS officials. Information referrals from the public alleging tax noncompliance must be submitted on paper forms by mail to the Internal Revenue Service (IRS). These referrals are manually screened by clerical staff and routed by mail to units across IRS for further action. Ineffective internal controls undercut IRS management of the information referral process. IRS does not have an organizational structure for information referrals with clear leadership for defining objectives and outcomes for measuring cost-effectiveness and results. Without clear leadership, IRS does not know how effectively it is leveraging information referrals to address the tax gap. IRS has incomplete documentation of procedures for the information referral process, increasing the risk of delays and added costs in routing the information for further action. Although one-quarter of the information referrals in fiscal year 2015 were sent for destruction after screening, IRS has not documented procedures for supervisory review of those referrals prior to destruction. Without procedures to address these control deficiencies, IRS is compromised in its ability to know how effectively it is leveraging tax noncompliance information reported by the public. Fragmentation and overlap across IRS's general information referral process and eight specialized referral programs, such as for reporting identity theft and misconduct by return preparers, can confuse the public trying to report tax noncompliance to IRS. Yet coordination between referral programs is limited, and IRS does not have a mechanism for sharing information on crosscutting issues and collaborating to improve the efficiency of operations across the mix of referral programs. As a result, IRS may be missing opportunities to leverage resources and reduce the burden on the public trying to report possible noncompliance. GAO recommends, among other things, that IRS establish an organizational structure that identifies responsibility for defining objectives and for measuring results for information referrals; document procedures for the information referral process; and establish a coordination mechanism across IRS referral programs. IRS agreed with GAO's recommendations.
INS processed approximately 1.3 million citizenship applications between August 31, 1995, and September 30, 1996; 1,049,867 of the applicants were naturalized. During this period, INS initiated a number of efforts, under a program called “Citizenship USA,” to accelerate and streamline its process for naturalizing citizens. In its December report, KPMG stated that while INS’ efforts greatly increased the volume of applicants who were processed and approved, the potential for error also increased during this period. In an effort to determine if past naturalization cases were adjudicated correctly, INS reviewed selected cases approved between August 31, 1995, and September 30, 1996. EOIR was to provide quality assurance assistance for INS’ review. KPMG, under contract with JMD, monitored and validated INS’ review. A primary naturalization criterion is that applicants must be able to establish good moral character to become naturalized citizens. Under certain circumstances, applicants who fail to reveal their criminal histories or who have been convicted of certain crimes, such as crimes involving moral turpitude (e.g., certain felonies and certain misdemeanors), cannot, by statute, establish good moral character. To judge if any citizenship applicants failed to establish good moral character, INS, with assistance from the FBI, was to identify those applicants who had criminal histories. Previously, to identify these applicants, INS required that aliens submit fingerprint cards with their applications for naturalization. Each fingerprint card was to include a complete set of fingerprints and other identifying information, such as the alien’s name and date of birth. INS was to send each fingerprint card to the FBI so that the FBI could determine if an alien had a criminal history record on file. Part of the naturalization process was to include an interview between an INS adjudicator and the applicant. The interview, which is done under oath, was to include a discussion about any criminal history of the applicant—that is, arrests or convictions—which should be available at the time of the interview. To judge if naturalization cases that were processed between August 31, 1995, and September 30, 1996, were adjudicated correctly and if the naturalization process had adequate controls, INS reviewed selected cases to determine, on the basis of the information in the files, if the naturalized citizens were of good moral character. INS, with the FBI’s assistance, identified 80,856 criminal histories for applicants believed to be naturalized during this period with records that included felonies, misdemeanors, or INS administrative arrests or convictions. An aspect of this review was to identify aliens who may not have revealed their arrests or convictions. After reviewing criminal histories provided by the FBI, INS identified 17,257 applicants who were naturalized between August 31, 1995, and September 30, 1996, with criminal history records of arrests for felonies or other potentially disqualifying crimes. To conduct the review, INS requested the 17,257 case files from its field offices. Only 16,858 of the requested case files were reviewed because INS field units could not locate 399 case files. Accordingly, INS reviewed 16,858 criminal histories and corresponding case files in an attempt to judge if these aliens should have been naturalized.
Under KPMG’s monitoring, INS activities included (1) collecting the appropriate criminal history records from the FBI, (2) sorting and categorizing these records, (3) matching (and filing) these records with the appropriate INS case file for the naturalized alien, (4) assigning case files to review adjudicators, and (5) ensuring that the case files were consistently reviewed and contained a standardized worksheet summarizing the results of the adjudicator’s review. Using a standardized worksheet, INS adjudicators reviewed the case files of these aliens and made independent judgments about the initial adjudication decisions. KPMG monitored the review adjudicators’ work. In addition to the 399 alien case files that INS could not locate, another estimated 300 criminal history records were not available for review and therefore were not included with the 80,856 criminal histories. The 300 criminal history records apparently had been in transit between the FBI and INS and were received too late to be included in the INS review. KPMG reported that INS’ preliminary assessment of the approximately 300 alien criminal history records was that most of these aliens had only old administrative arrests or were never naturalized. Furthermore, INS concluded that even if the case files for these aliens had been received in time for the review, very few of them would have been included in the INS review. To help ensure consistency among the INS review adjudicators in their decisionmaking, KPMG took a number of actions. These actions included the following: teaching the adjudicators how to complete the standardized worksheets in a consistent manner, checking the case files and standardized worksheets after the adjudicators’ reviews were completed, requiring a total review of all daily work from any adjudicators for whom significant errors in completing the standardized worksheets were found, requiring senior adjudicators to verify a sample of other adjudicators’ work each day, and identifying adjudicators’ recurring errors and providing additional guidance to those adjudicators to avoid the recurrence of the errors. In addition to the above actions, KPMG activities included (1) examining and categorizing each criminal history record and verifying that the record was part of the review, (2) safeguarding and securing files, and (3) promoting consistency of review adjudicator decisions by having discussions with the adjudicators when KPMG felt these discussions were needed. The INS adjudicators reviewed the case files of the 16,858 naturalized aliens with criminal history records that included records of arrests for felonies or other potentially disqualifying crimes to judge if the initial adjudications were proper. The review results were based only on the data in the case files at the time of the adjudicators’ reviews. In some cases, data may have been removed from or added to the INS case files after the initial decisions were made and before the files were reviewed. Also, although the adjudicators who made the initial decisions to approve the aliens’ naturalization applications had the benefit of discussing the naturalization applications with the aliens, the review adjudicators did not meet with the applicants. 
As shown in table 1, in its review of these 16,858 case files, INS designated each case as either “proper,” “requires further action,” or “presumptively ineligible.” According to INS officials, a case was designated as proper if the data in the case file supported the initial decision to naturalize the individual. A case was designated as requires further action if the data in the case file were insufficient to support a proper decision yet did not appear to indicate that the individual was barred from being naturalized. For example, some case files did not contain data about the dispositions of arrests that may have affected the individuals’ eligibility for naturalization. Cases involving a failure to disclose an individual’s criminal history were also classified as requires further action because the determination of whether the failure to reveal the criminal history affected the individual’s eligibility for naturalization required a legal determination that went beyond the scope of the INS review. A case was designated as presumptively ineligible if the data in the case file or the criminal history appeared to indicate that the alien should have been barred from being naturalized. INS is reviewing, for potential revocation, the 369 cases of those aliens who were judged to be presumptively ineligible as well as the 5,954 cases requiring further action. EOIR independently reviewed case files of previously naturalized aliens to provide quality assurance that INS’ decisions during the review were unbiased. EOIR reviewed a statistically valid sample of 557 alien case files from the universe of 16,858 cases involving aliens who had criminal history records. EOIR’s review was done separately from the INS adjudicators’ review. In conducting the review, EOIR teams of two staff each reviewed the alien case files at the Lincoln Service Center. The initial EOIR team received an orientation regarding the mechanics of properly completing the standardized worksheet. The lead EOIR staff member returned to the service center to provide the orientation to each subsequent team. The EOIR reviewers and the INS review adjudicators had the same decisions in 439 of the 557 cases (or 79 percent). Specifically, EOIR and INS independently judged that 288 cases were proper, 147 cases required further action, and 4 cases were presumptively ineligible (see table 2). The results for the 118 cases in which INS and EOIR reached different decisions were as follows: in 6 cases, INS judged that the aliens were presumptively ineligible, while EOIR judged that in 1 of these cases the initial adjudication decision was proper and in the other 5 cases further action was required by INS field units; in 40 cases, INS judged that further action was required by its field units, while EOIR judged that in 36 of these cases the initial adjudication decisions were proper and in the other 4 cases the aliens were presumptively ineligible; and in 72 cases, INS judged that the initial adjudication decisions were proper, while EOIR judged that in 68 of these cases further action was required by INS field units and in the other 4 cases the aliens were presumptively ineligible. Regarding the differences between the INS and EOIR decisions, KPMG reported that much of the naturalization process and the review of case file information required the reviewers to make subjective analyses. Therefore, according to KPMG, it was highly improbable that the reviewers would reach full agreement on all of the cases. 
KPMG stated that the major contributing factor to differences in INS’ and EOIR’s judgments was the interpretation of case file documentation regarding the applicants’ acknowledgment of prior criminal histories. KPMG added that, in many cases, EOIR and INS reviewers had to make subjective decisions as to whether sufficient case file documentation existed to justify their decisions. KPMG concluded that a 79-percent agreement rate between EOIR and INS reviewers was the most that could be reasonably expected when considering that the two groups worked independently, had varied backgrounds, and had to make many subjective analyses. KPMG provided no basis or analysis in its December report to support its conclusion that a 79-percent agreement rate was reasonable. We recognize the subjective nature of the reviews by the INS and EOIR reviewers (i.e., the reviewers had to interpret the data in the case files). We agree with the need to separate the two groups of reviewers to help enhance EOIR’s quality assurance role. However, consistent with accepted social science standards regarding training, it would have been helpful in reviewing and interpreting the results of their reviews if the two groups had received similar training. For example, before reviewing the case files, the INS review adjudicators received training on the standardized worksheet that they were to complete and received a training manual to help them complete the standardized worksheet. On the standardized worksheet, adjudicators were required to summarize the data in the aliens’ case files (e.g., arrest and conviction information) and evaluate the naturalization decision to be made regarding the alien—that is, proper, presumptively ineligible, or further action is required. The initial EOIR team was provided with an orientation and the lead EOIR staff member was responsible for providing the orientation to the other teams. However, the EOIR staff did not receive the same training provided to the INS review adjudicators even though they had to review the same files and complete the same standardized worksheet. Thus, the lack of such training may have contributed to some of the disagreement on the case files. For the 21 percent of the cases where INS and EOIR reviewers disagreed, the results were divided regarding which reviewer was more likely to judge that a particular naturalization was proper. For example, in 68 cases that INS judged were proper, EOIR judged that further action was required; in 36 other cases, EOIR judged them to be proper, but INS judged that further action was required. Although in these examples more of the INS judgments were in agreement with the initial adjudication, we could not conclude that a statistically significant difference existed between the INS and EOIR decisions. The agencies’ overall judgments produced generally similar conclusions about the percentage of the naturalization decisions that had been made properly. For example, INS and EOIR judged that 65 percent (360 divided by 557) and 58 percent (325 divided by 557) of the cases were proper, respectively, and both INS and EOIR judged that 2 percent (10 divided by 557 and 12 divided by 557, respectively) of the cases were presumptively ineligible. As previously discussed, INS is reviewing the 6,323 cases—that is, the 5,954 cases that INS judged as requiring further action and the 369 cases that INS judged the aliens to be presumptively ineligible—for potential revocation. 
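To make the percentage comparisons above concrete, the following sketch simply rechecks the arithmetic using the counts reported in this section (557 sampled cases; 439 agreements; 360 and 325 cases judged proper by INS and EOIR, respectively; 10 and 12 cases judged presumptively ineligible). It is an illustrative calculation only, and the variable names are ours, not part of the KPMG, INS, or EOIR reviews.

```python
# Counts taken from the report text; this sketch only rechecks the arithmetic.
SAMPLE = 557            # alien case files in EOIR's sample
AGREED = 439            # cases where INS and EOIR reached the same decision
INS_PROPER, EOIR_PROPER = 360, 325              # cases each agency judged proper
INS_INELIGIBLE, EOIR_INELIGIBLE = 10, 12        # cases judged presumptively ineligible

def pct(part, whole):
    """Percentage of `whole` represented by `part`, rounded to a whole number."""
    return round(100 * part / whole)

print(pct(AGREED, SAMPLE))                  # 79 -> the 79-percent agreement rate
print(pct(SAMPLE - AGREED, SAMPLE))         # 21 -> the 21-percent disagreement rate
print(pct(INS_PROPER, SAMPLE), pct(EOIR_PROPER, SAMPLE))           # 65 and 58 percent proper
print(pct(INS_INELIGIBLE, SAMPLE), pct(EOIR_INELIGIBLE, SAMPLE))   # 2 and 2 percent ineligible
```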
However, INS initially did not plan any additional action regarding the 72 cases in which EOIR disagreed with INS’ judgment that the initial INS adjudicators’ decisions were proper, which left unresolved questions about the soundness of INS’ decisions in these cases. According to the INS attorney involved with the review of the 6,323 cases for potential revocation, INS did not know about the 72 cases. After we questioned what was being done with these cases, the attorney said that the 72 cases would be included with the 6,323 cases being reviewed. According to KPMG, JMD requested a list of the 72 cases, which KPMG provided on January 23, 1998. According to INS, it has located the 72 case files to be reviewed for potential revocation. In its December report, KPMG identified a number of conditions that may have had an effect on the accuracy and completeness of INS’ review of its initial naturalization decisions. KPMG could not quantify the degree to which these conditions may have affected the ultimate decisions that the INS review adjudicators reached. These limiting factors included the following: The primary source of naturalization information came from an INS data system (Central Index System) that often has been found to be inaccurate. The differences in the INS and FBI information systems made it difficult to compare the INS records of naturalized citizens with the FBI criminal history records on aliens. INS was unable to locate 399 case files at the time of its review. The case file documentation varied significantly among INS offices; therefore, the case file documentation cannot be relied upon to definitively determine if the naturalization occurred. The INS review adjudicators’ decisions may not be the same as the decisions they might have made in their home units for various reasons, such as multiple state penal codes with which the review adjudicators had little experience and criminal history records that had unclear descriptions of arrests and very often did not record the ultimate disposition of arrests. In our opinion, another limiting condition of the adjudicators’ review was their need to rely entirely on the information in the case files. Information may have been added to or removed from the case files after the initial adjudication was made and before KPMG took control of the files. INS reviewed the case files of 16,858 aliens with criminal history records who had been naturalized between August 31, 1995, and September 30, 1996. Subject to the limitations KPMG and we identified, INS judged that the case files of 6,323 aliens did not have sufficient information to determine if naturalization was proper or contained information indicating that the aliens may have been improperly naturalized. INS is reviewing these cases for potential revocation. To provide quality assurance that INS’ decisions during the review were unbiased, EOIR reviewed a statistically valid sample of 557 alien case files from the universe of 16,858 aliens. Our analysis showed that EOIR’s and INS’ overall judgments produced generally similar conclusions about the results—that is, the proportion of naturalization cases found to be proper, to require further action, and to be presumptively ineligible. However, for 72 of the cases that INS review adjudicators had judged were properly naturalized, EOIR staff judged that further action was required to decide whether the initial adjudications were proper or the aliens were presumptively ineligible.
At the time of our review, INS initially was not planning any further action to judge if the naturalization decisions for the 72 cases were appropriate, thus leaving unresolved questions about the soundness of INS’ decisions in these cases. After we discussed the 72 cases with an INS attorney involved with reviewing the cases for potential revocation, he said that these cases are being reviewed with the other 6,323 cases. The overall approach KPMG employed to monitor INS’ judgments followed accepted social science standards. The standards KPMG used included (1) establishing procedures to ensure the appropriate collection and review of FBI criminal history records and the review of related alien case files, (2) promoting consistency in the judgments of INS adjudicators by providing training and using a standardized worksheet, and (3) identifying recurring adjudicator errors so that corrective action could be taken. In addition, KPMG’s report disclosed limitations in the study procedures followed and discussed conditions that may have affected the accuracy and completeness of INS’ review. KPMG concluded that the 79-percent agreement rate between INS and EOIR reviewers was the most that could be expected. Although KPMG did not disclose its basis for this conclusion, it seems reasonable to us that providing the EOIR staff with training similar to that provided to INS’ review adjudicators might have helped to reduce any differences in how the two groups reached their decisions. On February 19, 1998, we met with officials from JMD, INS, EOIR, and KPMG who represented those organizations responsible for the data discussed in our report and provided the views of those organizations. The officials represented the Director of the Management and Planning Staff, JMD; the Director, EOIR; the Commissioner, INS; and the Principal, KPMG. These officials agreed with our draft report, including its conclusions, and provided clarifying suggestions, which we included in this final report where appropriate. Our draft report contained a recommendation that the Commissioner of INS ensure that its Office of General Counsel follow through with its plans to analyze the 72 cases along with the other 6,323 cases that INS is reviewing for potential revocation. During our discussion with the officials, they said that INS is now taking action to review these 72 case files. Accordingly, we deleted the recommendation from this report. We are providing copies of this report to the Attorney General; the Commissioner, INS; the Director, EOIR; the Director, Management and Planning Staff, JMD; the Director, Office of Management and Budget; KPMG; and other interested parties. Copies will also be made available to others upon request. Major contributors to this report were James M. Blume, Assistant Director; Barry Jay Seltser, Assistant Director; James M. Fields, Senior Social Science Analyst; Ann H. Finley, Senior Attorney; Michael H. Little, Communications Analyst; and Charlotte A. Moore, Communications Analyst. If you need any additional information or have any questions, please contact me on (202) 512-8777. Norman J. Rabkin, Director, Administration of Justice Issues
Pursuant to a congressional request, GAO reported on the Immigration and Naturalization Service's (INS) review of its case files of aliens who were naturalized between August 31, 1995, and September 30, 1996, and who the Federal Bureau of Investigation (FBI) had identified as having criminal history records, focusing on the: (1) results of the INS and Executive Office for Immigration Review's (EOIR) case reviews; and (2) approach used by KPMG Peat Marwick LLP to monitor INS' efforts to identify improperly naturalized aliens. GAO noted that: (1) after receiving criminal history records from the FBI, INS reviewed the case files of 16,858 aliens with records that included a felony arrest or conviction of a serious crime who were naturalized between August 31, 1995, and September 30, 1996; (2) INS reviewed these criminal history records and its case files in an attempt to judge if these aliens should have been naturalized; (3) in its review of these 16,858 case files, INS designated each case as either proper, requires further action, or presumptively ineligible; (4) INS designated 10,535 cases as proper, 5,954 cases as requires further action, and 369 cases as presumptively ineligible; (5) to provide quality assurance that INS' decisions during the review were unbiased, EOIR reviewed a statistically valid sample of 557 alien cases from the universe of 16,858 aliens; (6) EOIR and INS reached the same decisions in 439 (or 79 percent) of the 557 cases; (7) although there was a 21-percent disagreement rate between the INS and EOIR reviewers, GAO could not conclude that a statistically significant difference existed between the INS and EOIR decisions; (8) INS is reviewing for potential revocation the 6,323 cases that its adjudicators judged as requiring further action or presumptively ineligible; (9) although INS initially did not plan to review the 72 cases that EOIR's review indicated may also have involved improper naturalization decisions, an attorney involved in reviewing the 6,323 cases said that these 72 cases are being reviewed with the other cases; (10) in carrying out its monitoring responsibilities, KPMG used accepted social science standards; (11) in monitoring the review, KPMG: (a) established procedures to ensure the appropriate collection and review of FBI criminal history records and the review of related alien case files; (b) promoted consistency in the judgments of INS adjudicators by providing training and having the adjudicators use a standardized worksheet; and (c) identified recurring adjudicator errors so that corrective action could be taken; and (12) KPMG's report also: (a) disclosed limitations in the study procedures followed; and (b) discussed conditions that may have affected the accuracy and completeness of INS' review.
Mr. Chairman and Members of the Subcommittee: We welcome this opportunity to appear before you today to discuss three areas of concern raised by the Committee last summer in its fiscal year 1996 appropriations report on the Bureau of Alcohol, Tobacco and Firearms (ATF). Those concerns involved ATF’s (1) use of force, (2) effect on the number of licensed firearms dealers, and (3) compliance with legislative restrictions on maintaining certain firearms licensee data. Today, we are releasing reports that address the first two concerns—use of force and licensing of firearms dealers. With respect to the third concern, data restrictions, our work is ongoing. As agreed, therefore, we will summarize our findings related to one data system—ATF’s system for maintaining records of firearms licensees who have gone out of business. With regard to the use-of-force issue, you asked us to (1) identify and describe ATF’s policies for the use of deadly force, (2) determine how ATF conveys its policies to agents, (3) determine the reasons for and the extent to which ATF uses dynamic entry and the equipment used to accomplish these entries, and (4) determine whether ATF has complied with its procedures for investigating shooting and alleged excessive force incidents. Moreover, you asked us to compare these issues with the way that the Department of Justice’s Drug Enforcement Administration (DEA) and Federal Bureau of Investigation (FBI) address them. To place ATF’s use-of-force incidents in perspective, from fiscal years 1990 through 1995, ATF, on average, arrested about 8,000 suspects annually but was involved in fewer than 10 reported shooting or alleged excessive force incidents each year. In October 1995, the Departments of the Treasury and Justice adopted deadly force policies for their component agencies that are uniform except for certain agency mission-specific provisions. Both policies provide that officers may use deadly force only when the officer has a reasonable belief that the subject of such force poses an imminent danger of death or serious physical injury to the officer or another person. We compared ATF’s prior deadly force policy, issued in 1988, with the new policy. The 1988 ATF and 1995 Treasury policies were consistent in that both policies generally authorized the use of such force only when the law enforcement officer reasonably believed or perceived that there was an imminent threat or danger of death or serious physical injury to the officer or another person. The two distinctions were that (1) the 1995 Treasury policy refers to the use of “deadly force,” while the 1988 ATF policy referred more specifically only to the use of a “firearm,” and (2) the 1995 Treasury policy allows for the use of deadly force only when the law enforcement officer has a “reasonable belief” that there is an imminent danger of death or serious physical injury, while the 1988 ATF policy allowed for the use of such force when the agent “perceives” such a threat. Additional discussion regarding these policies and distinctions, as well as those discussed below, is provided in chapter 2 (pp. 34 to 36) of our Use-of-Force report. In addition, the prior ATF policy was, with three distinctions, consistent with prior DEA and FBI policies in that they generally authorized the use of deadly force only when the agents reasonably believed or perceived that there was a threat or danger of death or serious bodily injury to the agent or another person.
The three distinctions were that (1) only ATF’s policy provided the additional restriction that the threat of death or serious bodily harm be “imminent”; (2) the ATF and DEA policies referred to the shooting of “firearms,” while the FBI policy used the term “deadly force”; and (3) the ATF policy used the term “perceives,” while the DEA and FBI used the terms “reasonably believes” and “reason to believe,” respectively. ATF conveys its deadly force policies to new agents through training. Our discussions with training officials, reviews of training materials and policies, and observations showed that the training provided to new ATF agents to introduce them to the deadly force policies was consistent with the Treasury/ATF deadly force policies, and the types of training provided were consistent with the training provided to new DEA and FBI agents. New agents are also trained on a use-of-force model that presents levels of subject resistance, ranging from a subject who complies with directions from a law enforcement officer to one where a subject assaults an officer with the potential for serious bodily harm or death. The model also presents five corresponding levels of force that would be appropriate to respond to the subject’s level of threat. Those responses range from verbal commands when the threat is low to deadly force when the threat is high. Emphasis is placed on resolving situations with the proper level of force while recognizing that situations can escalate and de-escalate from one level to another. Once training is completed, ATF requires that the use-of-force policies be reiterated to agents throughout their careers at quarterly firearms requalifications and during tactical operations briefings. DEA and FBI officials said that their deadly force policies also are to be reiterated at firearms requalifications. Dynamic entry, which relies on speed and surprise and may involve forced entry, is one of several tactical procedures used by ATF to execute search and arrest warrants. Dynamic entry was a principal tactical procedure used by ATF, DEA, and FBI when serving high-risk warrants—those where ATF believes that suspects pose a threat of violence—and entry to premises was required. ATF statistics on suspects arrested from firearms investigations during fiscal years 1990 through 1995 showed that 46 percent had previous felony convictions, 24 percent had a history of violence, and 18 percent were armed at arrest. All ATF case agents, including those assigned to special weapons and tactics units, known as Special Response Teams (SRT), are to be trained in the dynamic entry technique. From fiscal years 1993 through 1995, ATF conducted 35,949 investigations and arrested 22,894 suspects. During this same period, SRTs were deployed 523 times, and SRT members were involved in 3 intentional shooting incidents, 1 of which—the Waco operation—resulted in fatalities. We reviewed the available documentation for all 157 SRT deployments for fiscal year 1995 and found that the dynamic entry technique was used almost half the time and was the predominant technique used when entry to a building was required. In none of the 1995 SRT dynamic entries did ATF agents fire their weapons at suspects. Standard equipment available to agents during these operations includes firearms and protective vests. In addition to the standard equipment available, SRTs have access to additional firearms, such as bolt-action rifles, and specialized tactical equipment, such as diversionary devices. Equipment used by SRTs is generally comparable to that used by DEA and FBI agents during similar operations.
ATF’s procedures for reporting, investigating, and reviewing shooting and excessive force incidents, as revised in October 1994, are consistent with guidelines and/or standards recommended by the International Association of Chiefs of Police, the President’s Council on Integrity and Efficiency, and the Commission on Accreditation for Law Enforcement Agencies. For example, agents are required to immediately report shooting incidents to their supervisors, incidents are to be investigated by an independent unit, and certain reports are to be reviewed by a review board on the basis of the nature and seriousness of the incident. Overall, DEA’s and FBI’s procedures for reporting, investigating, and reviewing shooting incidents are comparable to ATF’s. Distinctions in the procedures include (1) DEA and FBI delegate some investigations to their field divisions, but ATF does not, and (2) DEA’s and FBI’s review boards include representatives from Justice, while ATF’s review board does not include representatives from Treasury. Although ATF’s excessive force procedures are comparable to DEA’s, with one distinction relating again to delegation, they are distinct from those employed by FBI. ATF is to investigate allegations of excessive force first and—if warranted—refer them to Justice for possible criminal investigation. In contrast, FBI is to refer all allegations of excessive force to Justice for possible criminal investigation before investigating the allegations itself. Our review showed that the reported shooting and alleged excessive force incidents had been reviewed by a designated headquarters unit. Our review also showed that ATF’s investigations of 38 reported shootings involving ATF agents firing their weapons at suspects found each to be justified and within the scope of its use-of-force policy. In addition, ATF’s investigations found that 18 of 25 reported excessive force allegations in three misconduct categories were unsubstantiated. Four investigations found evidence of some agent misconduct, two investigations were ongoing at the time of our review, and one was closed without action because ATF determined that there was no need for further review. Agents found to have engaged in misconduct received written reprimands and/or suspensions. Regarding recent declines in the number of firearms dealers, you asked us to (1) determine the extent and nature of the declines; (2) determine what factors contributed to the declines, including whether ATF had a policy to reduce the number of dealers; and (3) obtain the views of pertinent organizations on the advantages and disadvantages of reducing the number of dealers. Since reaching a high point in April 1993, the number of firearms dealers sharply declined by about 35 percent, from about 260,700 to about 168,400 dealers as of September 30, 1995—the lowest number since fiscal year 1980. This decline occurred nationwide and ranged from 23 percent in Montana to 45 percent in Hawaii. To provide a context for interpreting the recent decline, appendix II shows the number of firearms dealers in fiscal years 1975 through 1995. The decline occurred in part because applicants abandoned or withdrew more license applications when compared to previous years. Also, a large number of licensees voluntarily surrendered their licenses. Appendix III provides detailed data for fiscal years 1975 through 1995 on application and license activity for all categories of licensees. Our review showed that various factors collectively contributed to the decline in the number of dealers.
First, in January 1993, ATF initiated a National Firearms Program, which consisted of several regulatory enforcement strategies, including a strategy to closely scrutinize applicants for federal firearms dealer licenses and the operations of licensees to ensure strict compliance with the Gun Control Act of 1968, as amended. Under this program, the number of ATF full field inspections of firearms dealers and licensees increased. According to ATF, several factors led to this increased enforcement strategy. These factors included rising violence associated with the illegal use and sale of firearms, national media attention on the ease of obtaining a firearms dealer license, and ATF data that indicated that many licensees may not have been engaged in a firearms business. As a result, the number of ATF full field inspections of all applicants for federal firearms licenses and the operations of all such licensees increased from about 19,900 in fiscal year 1992 to a high of about 27,000 in fiscal year 1993—the period during which the National Firearms Program was initiated. Furthermore, from 1993 to 1995, the number of ATF inspections generally averaged about 9 percent of the total licensees, compared to 7 percent and lower before fiscal year 1993. As a result of its increased inspections, according to ATF, about 7,600 firearms dealer licensees voluntarily surrendered their licenses in fiscal years 1994 and 1995, the only 2 years for which ATF collected such data. Under ATF’s National Firearms Program, when an inspection showed that a dealer was not “engaged in a firearms business” at the location shown on the license, ATF inspectors were to advise the dealer to voluntarily surrender the license before implementing a formal revocation action. In addition, ATF used telephone interviews, called preliminary inspections, in fiscal years 1993 through 1995 as a means of scrutinizing federal firearms dealer applicants. According to ATF, a substantial portion of the approximately 2,500 applications abandoned and 7,200 applications withdrawn by applicants during fiscal year 1993 was directly attributable to ATF’s preliminary inspections. A second factor contributing to the declines was an August 1993 memorandum from the President directing Treasury and ATF to take actions to ensure compliance with federal firearms license requirements. The President pointed out that there were over 287,000 federal firearms licensees (all categories), many of which he stated probably should not have been licensed because they were not engaged in a legitimate firearms business. A third contributing factor was the Federal Firearms License Reform Act of 1993, passed by Congress in late November 1993. This act increased the licensing fees for obtaining and renewing a federal firearms dealer license. A fourth contributing factor was ATF’s revisions to the licensing application process that were done in late 1993 and early 1994 in response to the President’s August 1993 memorandum. ATF significantly revised the application form by adding a number of questions and requirements for supporting information to help it determine whether applicants intended to engage in the firearms business. For example, ATF required applicants to (1) submit fingerprints and photographs of themselves, (2) furnish a diagram of the business premises where their firearms inventories were located, and (3) provide a description of their security system for safeguarding firearms inventories. 
In July 1995, ATF reduced the number of questions and the amount of supporting documentation required. A fifth contributing factor was subsequent legislation requiring applicants for licenses to certify that their firearms business would comply with state and local laws. Finally, along with federal laws and administration actions contributing to the decline, the enforcement of state and local laws may have contributed to the reduction in the number of firearms dealers. These include licensing, taxing, and other business-related laws. Although ATF intensified its enforcement efforts, we found no evidence from our review of ATF documents and interviews with numerous ATF officials that ATF had a policy or sought to reduce the number of licensed dealers by some targeted number. Instead, we found that ATF’s strategy since 1993 had been to closely scrutinize firearms dealer applicants and licensees to ensure strict compliance with the Gun Control Act. While ATF had no policy to reduce the number of dealers to a targeted number, it recognized that its strategy of increased enforcement, along with the legislative actions, would likely result in a reduction in the number of dealers. We contacted officials from seven organizations to obtain comments on the advantages and disadvantages of reducing the number of licensed firearms dealers. Appendix IV contains the names and descriptions of the organizations, which represented the firearms industry, firearms consumers, law enforcement, and gun control interests. The officials from the seven organizations provided us with a variety of views on the advantages and disadvantages of reducing the number of firearms dealers. Their views generally concerned the effect that declines in the number of firearms dealers may have on crime, regulatory enforcement, and economics. Their views ranged from those who believed that by reducing the number of dealers there could be less crime and better monitoring of dealers to those who feared that dealer decreases would curb competition, raise prices, and limit the lawful availability of firearms. Along with these views, the officials from the seven organizations provided their views on the reasons for the declines in the number of firearms dealers, which confirmed the results of our analysis regarding factors contributing to the declines. Your third area of interest concerned ATF’s compliance with legislative restrictions on maintaining certain firearms licensee data. For these hearings, we agreed to focus on ATF’s Out-of-Business Records System and its role in the firearms tracing process. Specifically, our objectives were to (1) describe ATF’s overall firearms tracing process and, specifically, the Out-of-Business Records System and its role in the process; (2) determine the number and results of ATF’s firearms traces and the number of out-of-business records processed and used; and (3) determine whether the Out-of-Business Records System complies with legislative data restrictions. We also agreed to assess information on the Out-of-Business Records System that ATF supplied to one Subcommittee member. Detailed results and the scope and methodology of our review pertaining to ATF’s Out-of-Business Records System are included in appendix V. The Gun Control Act requires federal firearms licensees to maintain records of firearms transactions and make these records available to ATF under certain circumstances. Through the use of these records, ATF provides criminal firearms tracing services to law enforcement agencies. To perform traces, ATF needs to know the manufacturer and serial number of the gun.
ATF’s National Tracing Center (NTC) traces the ownership of firearms by using documentation, such as out-of-business licensee records, which are maintained in ATF’s data systems, and/or by contacting manufacturers, importers, wholesalers, and retailers (i.e., firearms dealers). NTC’s objective is to identify the last known purchaser of the firearm. NTC considers a trace completed when it traces the firearm to a retail firearms licensee or purchaser or when it cannot identify the purchaser. From fiscal years 1992 through 1995, ATF received a total of about 263,000 trace requests. During this period, the number of trace requests ATF completed more than doubled, from about 43,000 in fiscal year 1992 to about 86,200 in fiscal year 1995. ATF completed a total of about 243,600 trace requests during this 4-year period. In about 41 percent of the completed trace requests, ATF identified a retail firearms licensee or purchaser of the traced firearm. Shortly after the Gun Control Act was enacted, ATF issued regulations requiring firearms licensees that permanently discontinued their businesses to forward their records to ATF within 30 days following the discontinuance. The Firearms Owners’ Protection Act of 1986 codified this reporting requirement. Accordingly, since the enactment of the Gun Control Act, ATF has maintained the out-of-business records at a central location, which is currently at NTC in Falling Waters, West Virginia. Before fiscal year 1991, ATF maintained these records in hard copy. Performing traces by manually searching these copies was very time-consuming and labor intensive. ATF also had storage space problems. In 1991, ATF began a major project to microfilm these records and destroy the originals. This system still resulted in time-consuming traces. In fiscal year 1992, ATF used a minicomputer to create a computerized index of the microfilm records. The index contained information, including the firearm’s serial number and the firearms licensee number, to tell the tracing staff which microfilm cartridge to search and where on the cartridge the record was located. The indexed information that is captured by the minicomputer is then stored on a mainframe computer’s database to allow searches of the indexed information. Information, such as the firearm purchaser’s name or other identifying information, remains stored on the microfilm and is not computerized. ATF officials said all traces now begin with a query of the Out-of-Business Records System. During fiscal years 1992 through 1995, ATF received records from about 68,700 firearms licensees that went out of business. During this time, the number of licensees that went out of business more than doubled, from about 34,700 in 1992 to about 75,600 in 1995, and the percent of licensees that sent in their records increased by about three-fourths, from about 25 percent to about 43 percent. Also, during this period, ATF officials estimated that ATF microfilmed about 47 million documents contained in about 20,000 boxes. In addition, the officials estimated that ATF used the out-of-business licensees’ records to help complete about 42 percent of all completed trace requests during this period. The Firearms Owners’ Protection Act of 1986, codified in part at 18 U.S.C. 926(a), prohibits ATF from issuing any rule or regulation, after the date of that act, requiring that (1) firearms licensee records (or any portion thereof) be recorded at or transferred to a federal, state, or local government facility or (2) any system of registration of firearms, firearms owners, or firearms transactions or dispositions be established.
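To illustrate the kind of locator index described above—a computerized record keyed by serial number and licensee number that points tracing staff to a microfilm cartridge and frame while purchaser names stay on the microfilm—the following minimal Python sketch uses entirely hypothetical field names and values; it is an illustration of the concept, not a description of ATF’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexEntry:
    """One record in a hypothetical locator index (field names are illustrative)."""
    serial_number: str    # firearm serial number
    licensee_number: str  # federal firearms licensee number
    cartridge_id: str     # which microfilm cartridge holds the record
    frame: int            # approximate frame position on that cartridge

# The index holds only locator data; purchaser-identifying information is not stored here.
index = {
    ("A123456", "1-23-456-01-2A-34567"):
        IndexEntry("A123456", "1-23-456-01-2A-34567", "C-0042", 118),
}

def locate_record(serial_number: str, licensee_number: str) -> Optional[IndexEntry]:
    """Return the microfilm location for a trace query, if the record was indexed."""
    return index.get((serial_number, licensee_number))

print(locate_record("A123456", "1-23-456-01-2A-34567"))
```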
In a March 1995 letter to one Subcommittee member following hearings on Treasury’s fiscal year 1996 budget request, ATF described its maintenance and use of the out-of-business dealers’ records and explained that it believes these records are handled in compliance with the law. Specifically, ATF concluded that the storage and retrieval systems used for these records had been designed to comply with the statutory restriction relating to the establishment of a registration system for firearms, firearms owners, or firearms transactions or dispositions. We concur with this conclusion. Our detailed legal analysis is contained in appendix V. Furthermore, with regard to the operation of the Out-of-Business Records System, our review of ATF’s system documentation and discussions with ATF officials, along with our observation of the out-of-business records process at NTC, basically confirmed that ATF was operating the system in a manner consistent with the way it was designed by ATF and described in Treasury’s March 1995 letter. We found no evidence that ATF captures and stores the firearms purchasers’ names or other identifying information from the out-of-business records in an automated file. ATF provided oral comments on a draft of our testimony at a meeting with the ATF Director and other top-level officials on April 16, 1996. With regard to the use-of-force and firearms dealer licensee issues, the officials reiterated their previous comments on the respective reports, i.e., our presentation of the facts was accurate, thorough, and balanced. They also agreed with our findings and conclusions regarding the Out-of-Business Records System and provided some technical comments, which we incorporated where appropriate. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or the other Subcommittee members might have.
GAO discussed the Bureau of Alcohol, Tobacco and Firearms (ATF), focusing on its: (1) policy on the use of force; (2) licensing of firearms dealers; and (3) compliance with legislative restrictions on maintaining certain firearms licensee data. GAO noted that: (1) between fiscal year (FY) 1990 and FY 1995, ATF arrested an annual average of 8,000 suspects and had fewer than 10 reported shootings or alleged incidents of excessive force; (2) ATF deadly force and training policies are consistent with Drug Enforcement Agency (DEA) and Federal Bureau of Investigation (FBI) policies; (3) when serving high-risk warrants, ATF, DEA, and FBI use dynamic entry, a tactic that may involve forced entry and is used to gain rapid entry to premises; (4) in none of the FY 1995 ATF Special Response Team deployments did agents fire their weapons and in about half, agents used dynamic entries; (5) ATF procedures for reporting, investigating, and reviewing shootings and alleged excessive force cases are consistent with DEA and FBI procedures; (6) ATF complied with its reporting, investigating, and reviewing procedures, determined that all ATF shootings were justified, determined that most excessive force allegations were unsubstantiated, and punished agents determined to have engaged in misconduct; (7) between 1993 and 1995, the number of licensed firearms dealers declined by about 35 percent due to increased ATF law enforcement and new licensing laws; and (8) the ATF firearms licensee data system complies with legislative restrictions.
Refineries process crude oil into petroleum products through a combination of distillation and other processes. A single barrel of crude oil produces a varying amount of gasoline, diesel, jet fuel, and other petroleum products depending on the configuration—or complexity—of the refinery and the type of crude oil being refined. This report focuses on the production of finished gasoline. Finished gasoline is primarily defined by three characteristics: blendstock, vapor pressure, and oxygenate content. Blendstock is the designation for the base gasoline produced so that other materials can be blended in to meet various air quality or other local specifications. Vapor pressure, also known as Reid Vapor Pressure (RVP), measures the gasoline’s evaporation characteristics or volatility. Oxygenates are fuel additives, particularly alcohols and ethers, which increase gasoline octane levels and reduce carbon monoxide pollution associated with automobile emissions. The most widely used oxygenate in the United States is ethanol, which may be added to gasoline in varying percentages. Federal regulations specify that no more than 10 percent ethanol can be blended into gasoline. Ethanol is generally blended with gasoline at the terminal or wholesale “rack”—the distribution center between refineries and retail fueling stations. For the purposes of this report, conventional gasoline does not contain special federal, state, or local blendstock, RVP, or oxygenate requirements unless otherwise noted, while “special fuel blends” refer to blends of gasoline that are designed to be cleaner burning and generally contain either a certain blendstock, RVP, or oxygenate requirement to meet federal, state, or local fuel specifications. An example of a gasoline used to meet a state fuel specification is California Air Resources Board (CARB) gasoline, which is designed to reduce harmful exhaust emissions that cause smog and is used exclusively in California. Petroleum product markets are evolving in part as a result of the increasing use of biofuels—fuels derived from plant or animal products— throughout the country. The Energy Policy Act of 2005 generally required that at least 7.5 billion gallons of biofuels be blended into motor vehicle fuels in the United States by 2012. These targets were later amended under the Energy Independence and Security Act of 2007, which increased the volume of biofuels to be blended with gasoline from 9 billion gallons in 2008 to 36 billion gallons in 2022. EPA was charged with implementing the Renewable Fuel Standard (RFS) program and issuing regulations to ensure that the annual volumes of biofuels specified by the legislation are being blended into motor vehicle fuels. In addition, some states require the use of biofuels. For example, in Minnesota all fuel must contain 10 percent ethanol, while a number of other states offer consumers incentives—such as tax credits and rebates—for purchasing ethanol or other biofuels. The steadily increasing use of biofuels in the United States has complicated the production and distribution of gasoline. Biofuels such as ethanol are produced at dedicated biofuel production facilities—not at refineries—and currently cannot be transported by most petroleum product pipelines. Therefore in order for ethanol to be blended with gasoline, it must be shipped to the terminal by truck or rail, where it is then mechanically mixed with gasoline as it is delivered into trucks for shipping to retail. 
Gasoline with or without biofuels is typically sold as either branded or unbranded. Branded gasoline is supplied by major refiners and sold at retail stations under those refiners’ trademarks, and often contains special additives. Contracts for branded gasoline tend to be less flexible than contracts for unbranded gasoline but guarantee a more secure supply. Conversely, unbranded gasoline may be supplied by major or independent refiners, but is not sold under a refining company’s trademark. Buyers of unbranded gasoline may or may not have a binding contractual arrangement with a refiner. The supply infrastructure—which includes pipelines and terminals that hold supply inventories—is a critical component of the nation’s petroleum product market in that it facilitates the flow of crude oil and petroleum products from one geographic region to another. Crude oil pipelines connect several large refining centers to crude oil sources, and petroleum product pipelines connect these refineries to population centers all over the country. Thus, a disruption in one geographic region can affect the supply and prices in another geographic region. To help mitigate the effects of potential supply disruptions caused by refinery outages or sudden increases in demand and to facilitate smooth supply operations, refiners, distributors, and marketers of petroleum products maintain inventories of crude oil and petroleum products. Inventories represent the most accessible and readily available source of supply in the event of a production shortfall, such as one caused by a refinery outage, or an increase in demand. In October 2008, we reported that unplanned and planned refinery outages across the United States did not show discernible trends in the frequency or location of outages from 2002 through 2007, with the exception of impacts beginning in 2005 related to Hurricanes Katrina and Rita. During that study, however, we found that EIA does not collect information on refinery outages directly and thus the information it collects on its monthly refinery survey and uses to indirectly estimate outages has a number of limitations. Specifically, EIA’s method of using EIA-810 data to estimate outages cannot distinguish between planned and unplanned outages, which could have different impacts on petroleum product prices for consumers. Also, as we reported, because the monthly refinery survey data are monthly aggregate data, major outages that straddle the end of one month and the beginning of the next may be difficult to identify and the observable effects of those outages could be diluted. We further reported that the exact date and length of an outage may be difficult to determine from EIA’s monthly refinery survey data, making it difficult to use the data to determine whether a specific outage had a significant effect on the production capacity for some petroleum products as well as market prices. Several U.S. agencies have jurisdiction over and monitor the U.S. refining and supply infrastructure industries and petroleum product markets. Within the Department of Energy (DOE), the Energy Information Administration (EIA) collects and analyzes data, including supply, consumption, and prices of crude oil and petroleum products; inventory levels; refining capacity and utilization rates; and some petroleum product movements into and within the United States. Much of the data that the agency collects is obtained by surveys under EIA’s Petroleum Supply Reporting System (PSRS).
The PSRS comprises 16 data collection surveys and includes, among others, weekly and monthly surveys of refiners, terminals, and pipelines. The purpose of the PSRS is to collect and disseminate basic and detailed data to meet EIA’s responsibilities and energy data users’ needs for credible, reliable, and timely information on U.S. petroleum product supply. EIA generally updates its PSRS surveys every 3 years and has issued such updates in 2003, 2006, and 2009. EIA also conducts analyses in support of DOE’s mission and in response to Congressional inquiries. For example, EIA recently conducted its semiannual forecast of planned refinery outage effects. EIA evaluates a wide range of trends and issues that could have implications for U.S. petroleum product trends and markets, and each year issues a publication known as the Annual Energy Outlook. The Environmental Protection Agency (EPA), among other things, develops and enforces regulations that implement environmental laws that aim to control the discharge of pollutants into the environment by refiners and other industries. The EPA, with the concurrence of DOE, can grant waivers on fuel requirements that allow petroleum product markets to be more easily re-supplied should an "extreme and unusual" situation—such as a problem with distribution of supply to a particular region, a natural disaster, or refinery equipment failure—occur. In addition, EPA oversees the Reformulated Gasoline (RFG) program. This program was developed in response to a requirement in the Clean Air Act that cities with the most severe smog pollution use reformulated gasoline—gasoline blended to burn cleaner and reduce smog-forming and toxic pollutants in the air—to reduce emissions. EPA is also responsible for implementing and issuing regulations to ensure that gasoline sold in the United States contains a minimum volume of biofuels, such as ethanol or biodiesel, and its reporting requirements, according to EPA officials, are geared toward collecting data on fuel quality, which is enforced at the refinery. Under EPA’s Renewable Fuel Standard (RFS) program, refiners, importers, and blenders are required to use a minimum volume of biofuels each year, determined as a percentage of the total volume of fuel the company produces, blends, or imports. Entities that are unwilling or unable to meet this percentage standard may purchase biofuel credits from other obligated parties in order to satisfy the requirement. EPA monitors RFS program compliance and has the authority to waive the standard if it determines that specified biofuel volumes would cause severe harm to the economy or the environment in a particular region, state, or the country or that there is an inadequate domestic supply. The Department of Transportation’s (DOT) Pipeline and Hazardous Materials Safety Administration focuses on pipeline safety and establishes standards for transmission and distribution systems for crude oil and petroleum product pipelines. Among other things, it oversees pipelines’ design, maintenance, and operating procedures to maintain the safe, efficient, and reliable delivery of petroleum products. The Federal Energy Regulatory Commission (FERC) monitors energy markets and regulates the interstate transportation of natural gas, oil, and electricity, including the rates and practices of oil pipeline companies. It establishes and enforces the rates, known as "tariffs," for transporting petroleum and petroleum products by pipeline.
While it can be expected that some refinery outages have quite large price effects, the results of our analysis found that on average refinery outages were associated with small increases in gasoline prices. Based on our analysis of wholesale prices across 75 U.S. cities from 2002 through September 2008, planned outages generally did not influence prices, while unplanned refinery outages had generally small wholesale gasoline price effects in the cities they serve. Price increases varied depending on whether the gasoline was branded or unbranded and according to the gasoline type affected by the outage. On rare occasions, refinery outages can have large temporary effects on gasoline prices. For example, as we recently testified, petroleum product prices increased dramatically following Hurricanes Katrina and Rita. This occurred in part because many refineries are located in the Gulf Coast region and power outages shut down pipelines that refineries depend on for crude oil supplies and to transport refined petroleum products, including gasoline to wholesale markets. DOE reported that 21 refineries in affected states were either shut down or operating at reduced capacity in the aftermath of the hurricanes. In total, nearly 30 percent of the refining capacity in the United States was shut down, disrupting supplies of gasoline and other products. Two pipelines that send petroleum products from the Gulf Coast to the East Coast and the Midwest were also shut down as a result of Hurricane Katrina. For example, Colonial Pipeline, which transports petroleum products to the Southeast and much of the East Coast, was not fully operational for a week after Hurricane Katrina due to large-scale power outages and flooding. Consequently, according to the Federal Trade Commission, average gasoline prices for the nation increased 45 cents-per-gallon between August 29 and September 5, 2005; short-term gasoline shortages occurred in some places; and the media reported gasoline prices greater than $5 per gallon in Georgia. The hurricane came on the heels of a period of high crude oil prices and a tight balance worldwide between petroleum demand and supply, and illustrated the volatility of gasoline prices given the vulnerability of the gasoline infrastructure to natural or other disruptions. While extreme outages can cause large temporary price increases, such events were relatively uncommon during the period of our analysis. For example, for unbranded prices, of the approximately 1100 unplanned outages we evaluated, 99 percent of the time they were associated with wholesale price increases of no more than about 32 cents-per-gallon, and 75 percent of the time they were associated with price increases of less than 6 cents-per-gallon in the cities affected. Overall, our analysis indicated that planned outages—where refineries temporarily shut down to perform routine maintenance or equipment upgrades—generally did not have a significant effect on wholesale gasoline prices. As we reported in October 2008, planned outages are typically scheduled during periods of less demand and interspersed among refiners and refineries. In addition, the equipment and labor are generally booked months—or even years—in advance, and can be arranged with those customers with whom the refiners have long-term contracts at a cost less than would be required in an emergency or unplanned situation. 
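Returning to the distribution of unplanned-outage price effects cited above, percentile figures of that kind (for example, the 75th- and 99th-percentile effects) can be computed directly from a set of estimated outage-level price changes. The short Python sketch below shows one way to do so; the input array is purely illustrative, not GAO’s data.

```python
import numpy as np

# Hypothetical estimated wholesale price changes (cents per gallon) associated with
# unplanned outages; in practice these would come from the econometric model's estimates.
price_effects = np.random.default_rng(0).gamma(shape=1.5, scale=2.0, size=1100)

for pct in (50, 75, 99):
    print(f"{pct}th percentile: {np.percentile(price_effects, pct):.1f} cents per gallon")
```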
Industry representatives told us that because a refinery must draw on a limited number of equipment makers and skilled laborers, the refinery’s plans for maintenance eventually become public knowledge. In this case, the market “expects” the outage to occur, therefore planned outages do not generally trigger significant price responses, unless something unexpected occurs or the market is disrupted elsewhere. Furthermore, refineries stockpile petroleum products in preparation for planned outages and therefore do not experience the same shortage of production materials experienced during unplanned outages. Unplanned outages, on the other hand, were associated with gasoline price increases but these increases were generally small and depended on key factors, including whether or not the gasoline was branded or unbranded and the type of gasoline being sold. With respect to the distinction between branded and unbranded gasoline, our analysis showed that in the event of an unplanned refinery outage, unbranded gasoline was generally associated with greater wholesale price increases than branded gasoline. Specifically, we found that for conventional gasoline—the most common and widely available gasoline blend—unbranded gasoline had an average 0.5-cents-per-gallon increase in price associated with unplanned refinery outages, while branded gasoline had a smaller—about 0.2-cents- per-gallon—increase. The price effects observed in these cases reflect an average increase in prices at the wholesale terminals in the 75 cities over the study period. These results suggest that—as some traders and other market participants have told us—during disruptions, refiners generally choose to give priority in supplying those customers with whom they have long-term supply contracts, which typically are for branded gasoline. Therefore, in such conditions independent marketers—which typically sell unbranded gasoline—may be forced to pay higher prices to obtain product to sell. On the other hand, industry experts told us that unbranded sellers may be able to buy wholesale gasoline at lower prices than branded sellers during normal market conditions. With regard to the type of gasoline fuel blend being sold, our analysis shows that the price increases associated with an unplanned refinery outage were significantly greater for 8 of the 19 “non-base-case” gasoline types we identified than our “base case” conventional clear gasoline, while the price increases for other gasoline types were generally about the same as those of conventional gasoline. In our analysis, we selected conventional gasoline as our base case and used our model to determine whether there were significant differences between this base case and other fuel types with respect to the relationship between unplanned refinery outages and price changes. We looked at 19 other non-base case fuel types that were in use in the 75 cities we reviewed. We compared the results of these 19 other fuel types to our conventional gasoline base case and measured the price differences. The price increases associated with unplanned refinery outages for various branded and unbranded gasoline types that were higher than our conventional gasoline base case are shown in table 1. The results suggest that some special fuel blends that include such characteristics as unusual oxygenate requirements, lower RVP requirements, or unusual oxygenate/RVP combinations may be more sensitive to unplanned outages than other special fuel blends. 
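As a simplified illustration of how this base-case comparison works in a regression with interaction terms (the notation here is ours, not drawn from the report), the estimated price response for a given fuel type g can be written as

\[ \Delta \ln(\text{price}) \approx (\beta_{\text{outage}} + \gamma_g) \times \text{OutageEffect}, \]

where \(\beta_{\text{outage}}\) is the effect estimated for the conventional clear gasoline base case, \(\gamma_g\) is the interaction coefficient for fuel type g (zero for the base case), and OutageEffect is the share of the city's supply disrupted. The 8 fuel types reported as more sensitive than the base case are those whose estimated \(\gamma_g\) is positive and statistically significant.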
For example, for unbranded gasoline, the prices of some special fuel blends— such as CARB, conventional gasoline with oxygenate formulations such as 5.7 percent ethanol, or uncommon oxygenate/RVP formulations such as conventional gasoline with 10 percent ethanol and a 7.0 RVP—were more sensitive to unplanned refinery outages than conventional gasoline without such specifications. Specifically, the largest price differences between our conventional gasoline base case and special gasoline blends, were for CARB without oxygenate and conventional gasoline blended with 10 percent ethanol and a 7.0 RVP. In these instances, prices were about 10- cents and 8-cents-per-gallon higher than our base case. The results show that the prices of unusual oxygenate/RVP combinations that are not commonly produced at most refineries may be more sensitive to unplanned outages than conventional gasoline, which can be more readily re-supplied to a city experiencing an outage. Our analysis also shows that a number of other special fuel blends did not experience significant price increases associated with unplanned refinery outages above that of conventional gasoline, although the fuel types affected depended partly on whether the gasoline was branded or unbranded. These fuel types and the locations that require them are shown in table 2. Finally, it should be noted that individual outages may have different effects on prices depending on a variety of factors beyond those discussed above. As discussed previously in this report, and in work by EIA and the California Energy Commission, under certain conditions—such as low inventories, high seasonal demand, certain special fuel requirements, and geographic conditions that may hinder easy re-supply to the market—an unplanned refinery outage could be expected to result in a price surge in some cases. However, in some cases, unobserved factors can mitigate the effects of outages, or even cause prices to fall, making it appear as if the outage caused prices to fall. For example, a large shipment of a particular special fuel blend located just offshore or beyond the Canadian border could be a significant source of re-supply in the event of a disruption. In addition, while our analysis examined the effect of about 1,100 unplanned outages and 1,000 planned outages, our model did not differentiate between the types of refinery equipment that went out of service, which could have varying effects on wholesale gasoline prices. For example, an unplanned outage of a fluid catalytic cracker—a type of processing equipment used to maximize the production of gasoline—could be expected to have a more significant effect on wholesale gasoline prices than an unplanned outage on a piece of equipment—such as a certain type of hydrotreater—that is designed to maximize production of distillates such as diesel fuel or heating oil. Because our model does not distinguish between the type of unit experiencing an outage, our results show average impacts across different types of refining units, which means we tend to underestimate the effect of an outage at a unit such as a fluid catalytic cracker, and overestimate that of a non-gasoline producing unit. Existing federal data contain gaps that limit analyses of refinery outages on petroleum product prices and in some cases do not reflect emerging trends—although agencies continue to take steps to improve their data collection. 
These data gaps created challenges to our, and another federal agency’s, analyses and ability to respond to Congressional inquiries. Specifically, we were limited in this report in our ability to fully evaluate 1) the price effects of unplanned outages at individual cities and 2) a city’s gasoline re-supply options in the event of an outage. Our ability to fully evaluate the price effects of unplanned outages at individual cities—for example, price effects in Atlanta, Georgia associated with outages related to Hurricanes Ike and Gustav—was limited because federal data do not link refiners to the cities they serve. Although federal data exist regarding most refinery activities, the refiner-to-market link contains key gaps. While EPA’s annual reformulated gasoline area report requires each refinery to identify the cities the refinery believes it supplies with reformulated gasoline, this reporting is limited to reformulated gasoline. As such, the reports do not capture the estimated refiner-to-city link for a majority of gasoline types—including conventional gasoline and special fuel blends—sold in the United States. Further, the reports are not intended to identify the quantities of gasoline distributed. EIA’s monthly refinery survey, the EIA-810, collects data regarding the volume of certain petroleum products being produced at refineries, including gasoline and unfinished gasoline blending components, but does not distinguish among all types of gasoline, such as premium versus regular or summer versus winter RVP, or identify which cities refineries serve. Our ability to identify a city’s gasoline re-supply options in the event of an outage was also limited because of gaps in federal pipeline flow data. Although we identified flow data collected at three agencies, the data were of limited use because they did not show the volumetric entry, flow, and exit of specific petroleum products through the pipeline. These specific data are important to understanding which refiners can and cannot supply various cities in the event of an outage and thus can be used to help determine potential price impacts. FERC’s quarterly reports by pipeline operators specify the number of barrels of petroleum products pipeline companies transport, but these data do not identify the entry and exit points of petroleum products along the pipeline infrastructure system, or the specific type of fuels transported. EIA’s monthly pipeline survey collects data on pipeline shipments between Petroleum Administration for Defense Districts (PADD)—a geographic aggregation of the 50 states and the District of Columbia split into five districts—as well as pipeline inventories by PADD. However, data at the PADD level do not correspond to particular cities and therefore the data cannot be used to identify the states and/or cities in which petroleum product flows originate and terminate. DOT’s annual report on hazardous liquids collects pipeline flow data, but DOT officials told us, and we also found, that these data are highly aggregated and the annual collection of information is too infrequent to be informative in many cases. Further, these data are not designed to show the discrete movement of petroleum products through the pipeline infrastructure. To help address these gaps in federal data, we purchased commercial data for our analysis from the energy consulting company Baker & O’Brien (see app. I). These data estimate the quantity flows of gasoline and other petroleum products produced at most U.S. 
refineries and transported to those U.S. cities that make up the main markets for these products. While we found the Baker & O’Brien data to be sufficiently reliable for the purposes of our analysis, these data are estimates only. Although we determined the commercial data that we purchased to perform our analyses were sufficient to describe the wholesale price impacts associated with refinery outages on various gasoline types, the data were not sufficient to accurately estimate the effects experienced by individual cities. Further, the comprehensiveness of the data we purchased was limited in part because private companies do not have the same ability as the federal government to require refiners to provide comprehensive and accurate information. Similar gaps in federal data also limited a recent effort by another federal agency to fully address Congressional concerns regarding potential pipeline constraints and agency concerns regarding refinery outages. In a January 2009 Congressionally mandated study to identify potential pipeline infrastructure constraints, DOT was unable to fully address the study’s objectives due to the lack of appropriate federal pipeline flow and petroleum product storage data. In its report, DOT noted that “a need exists to develop more robust metrics for such (pipeline flow) measurements.” The report also stated that “reliable data on storage facilities is sparse” and emphasized the need for additional data on oil and petroleum product storage terminals, including the location, size, and volumetric capacity of existing facilities to assess whether stored petroleum products are sufficient to mitigate supply disruptions. In addition, the study noted that additional data regarding the changing location and arrangement of petroleum product pipelines would be necessary to evaluate volumes of petroleum products transported. DOT concluded that an analysis sufficient to address Congress’s directives in the 2006 law would require further quantitative and analytical modeling. In particular, DOT officials told us the federal interagency effort to collect data would need to result in more comprehensive data—including volumetric pipeline entry, flow, and exit information—as well as more reliable storage terminal and inventory data in order to more fully assess the current and future reliability of the nation’s pipeline infrastructure and ability to respond to market disruptions. The absence of key data also limits the ability of federal agencies to monitor the effect of emerging trends such as the use of biofuels—for example, ethanol—in petroleum product markets. Specifically, we found that gaps in federal data do not allow agencies to track where gasoline blended with ethanol ultimately winds up in the fuel stream. Not having this information may be at odds with consumer’s interests. Since, according to EPA, a gallon of ethanol contains two-thirds the energy of a gallon of gasoline, when gasoline blended with ethanol is sold in areas with no ethanol or oxygenate requirement, consumers may be purchasing fuel that provides fewer miles-per-gallon without being aware of it. Our analysis of gasoline sales data shows that from 2002 through 2008, conventional gasoline blended with ethanol had been sold in areas with no ethanol or other oxygenate mandates in at least 32 states. 
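To put the energy-content point in rough numerical terms, using only the two-thirds figure cited above and assuming a 10 percent ethanol blend, the energy per gallon of the blend relative to ethanol-free gasoline is approximately

\[ 0.90 \times 1.00 + 0.10 \times \tfrac{2}{3} \approx 0.967, \]

that is, roughly 3 percent less energy per gallon, which would translate into correspondingly fewer miles per gallon, all else being equal.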
Agency and industry officials told us that as the volume of biofuels to be blended with gasoline continues to grow to 36 billion gallons in 2022, ethanol will increasingly be distributed in locations that do not have requirements for oxygenate content. Despite these gaps in federal data, individual agencies have generally continued to take steps to update their data collection surveys to meet their respective agency objectives or needs, and have often coordinated to more efficiently obtain petroleum product data needed for a variety of purposes at multiple agencies. In 2009, EIA began collecting data regarding the production, stocks at production facilities, sales for resale, and end-use sales of biodiesel fuel. Also, three existing EIA forms were expanded to collect biodiesel imports and biodiesel blending and stocks at terminals and refineries. Our work indicates this new survey will help analysts identify how and where biodiesel is being used, a key emerging trend in the petroleum industry. In addition, these data will be used by EPA to help monitor the volumes of biofuel use specified in the RFS. Effective January 2009, EIA consolidated reporting of inventory information at refineries, pipelines and terminals from two surveys to one. This action will permit a more detailed and reliable analysis of petroleum product terminal operations and provide a baseline for the volume of petroleum products at various terminal locations that can potentially re- supply a city in the event of a major disruption. While this partially addresses our need to have federal data that shows the re-supply options in the event of a disruption, it neither shows the refiner-to-market link nor does it provide detailed batch information on petroleum product flows that would facilitate future analyses. Comprehensive inventory information may be particularly useful to DOT should it be tasked with completing another study to identify potential petroleum product infrastructure constraints. EPA officials told us they have worked with the Department of Agriculture and DOE in recent years regarding the recently issued 2007 Renewable Fuels Standard program guidance. The aim of such guidance is to monitor biofuel use—a key emerging market trend—and monitor compliance with biofuels specified in the RFS. Nonetheless, in some cases the individual agency efforts have resulted in the collection of information that does not necessarily meet the data needs of other agencies or analysts who monitor petroleum product markets. For example, federal reporting efforts have evolved such that EIA maintains primary responsibility for collecting information on total gasoline supply, including gasoline blendstocks, while EPA maintains primary responsibility for capturing another key characteristic—RVP—of certain gasoline blendstocks. Specifically, EIA’s surveys are structured to collect data on total gasoline supply, including blendstocks, on a monthly basis, whereas EPA collects RVP information on each batch of reformulated gasoline on a quarterly basis, and for all conventional gasoline supplied by a particular refiner on an annual basis. This means that companies report key information regarding gasoline components to two different federal entities, and analysts who need information regarding the blendstock and RVP of gasoline must go to two federal entities to obtain what is available; in addition, the data are not comparable in terms of periodicity. 
Finally, as described earlier, three different agencies collect a limited amount of pipeline flow data to meet their own specific objectives, but collectively these data do not allow analysts to fully monitor the flow of petroleum products through these markets. This limited not only our ability to identify a city’s gasoline re-supply options in the event of an outage in this analysis, but also DOT’s efforts to fully address a Congressional mandate. In sum, these separate pieces of data do not come together to form a complete picture of current petroleum product markets. To the extent reasonable, the collection of petroleum product data by federal agencies should allow these and other agencies and analysts to form a clear picture of U.S. petroleum product markets while minimizing the government’s costs of collecting and maintaining, as well as the costs to industry of providing, these data. In our work we identified gaps in public data, some of which we could address by purchasing privately collected data, and some of which led to limitations to what our analysis could address. Specifically, we were unable, with publicly available data, to identify which refiners serve various cities across the country, and by extension, which refineries produce special fuel blends designed to meet federal, state, and local requirements. While the available public data, along with the commercial data we purchased, allowed us to analyze the broad impacts of refinery outages on various gasoline types, on average, during the initial week of an outage, the data were not sufficient to determine the effects at individual cities. We also found an absence of some data on emerging market trends in biofuels, which is troubling given the rapid expansion of biofuel production and use in recent years. Some data gaps we identified may exist because data collection efforts generally reflect individual agency needs and, thus, may not necessarily consider the broader needs of other federal agencies and analysts. We recognize that agencies have a primary responsibility to perform their individual missions and that these agencies face their own specific budgetary constraints. However, we note the importance of each agency acknowledging that the collection of individual pieces of federal data contributes to a larger data universe and taking reasonable steps to ensure that the totality of these data allow for meaningful understanding and oversight of petroleum markets. In addition, agencies must be conscious of efficiency by considering the costs associated with gathering and maintaining data. Improving the usefulness and completeness of publicly held data—as well as reducing the associated costs—will require that each agency be aware of the part of the overall data picture it is responsible for, as well as the usefulness of these data beyond the immediate agency mission. Continued and improved coordination between such agencies, including EIA, EPA, DOT, and FERC, could improve the collective understanding and oversight of the refining industry and petroleum product markets.
To evaluate existing, publicly held petroleum products market data and to determine if they are sufficient to meet the current and expected future missions and needs of the Congress, federal agencies, and other public and private stakeholders, we recommend that the Administrator of EIA convene a panel composed of agency officials from EIA, EPA, DOT, FERC, and other relevant agencies, industry representatives, public stakeholders, and other analysts and data users, to collect these data and develop a coordinated interagency data collection strategy. The panel should: assess the costs and benefits of collecting more systematic information about which refiners serve which cities and more discrete reporting of the volumetric entry, flow, and exit of petroleum products through the pipeline infrastructure system; identify additional data that would be useful to track and evaluate emerging market trends—such as the proliferation of biofuels and special blends—and assess the costs and benefits of collecting such data; identify opportunities to coordinate federal data collection efforts so that agencies can respond fully to Congressional requests and meet governmentwide data needs to monitor the impact of petroleum product market disruptions; and identify areas where data collection is fragmented—such as multiple survey instruments collecting similar information—to determine if those efforts can be consolidated and modified to enhance the overall usefulness and improve the efficiency of collecting and reporting these data. We provided a draft of this report to the Department of Energy (DOE) and its Energy Information Administration (EIA), the Environmental Protection Agency (EPA), and the Department of Transportation (DOT) for review and comment. DOE’s EIA agreed with our recommendations and provided additional comments regarding the recommendations and the report’s discussion of data gaps, which are summarized below. EIA also provided technical and clarifying comments, which we incorporated as appropriate into the report. EPA and DOT provided only technical comments, which we also incorporated as appropriate. Regarding our recommendations, EIA stated that it supports the recommendations, including our specific suggestions to review data adequacy, strengthen interagency coordination of data collection and use, and fully engage government, industry, and public stakeholders. EIA stated that it believes it has a strong program to address all of these suggested actions, and is working closely with other federal entities through established joint programs, as well as informally, to coordinate data collection. For example, the agency noted it has been working with an interagency group composed of 40 federal agencies to facilitate the development of a trade processing system for U.S. Customs and Border Protection. In commenting on the report’s discussion of data gaps, EIA stated it agrees that a review of possible data gaps is necessary and noted that it is currently—as of July 2009—reviewing the adequacy and quality of currently collected and commercially available refinery outage information. The agency believes, and we agree, that the adequacy of refinery outage data for analysis is an issue that EIA has taken seriously. To this end, EIA noted it published Federal Register notices on December 9, 2008, and February 28, 2009, informing the public of the agency’s intended review of refinery outage data.
EIA plans to complete its review and provide its recommendation regarding additional government data collection this fall in its mandated semiannual refinery outage study. EIA stated it then plans to publish its analysts’ assessment and recommendations to solicit the broadest possible comment. At that time EIA will consider the use of a panel of government, industry, and public stakeholders—as we suggested—to determine its future steps. We support EIA’s efforts to address data issues and believe that its current plans are a step in the right direction toward ensuring that the best data are available to help achieve its mission of producing independent and unbiased research to help the Congress, public, and international community better understand energy markets and promote sound policy-making. We are sending copies of this report to interested Congressional committees; the Administrator of the Energy Information Administration; the Administrator of the Environmental Protection Agency; the Secretary of the Department of Transportation; and other interested parties. This report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix III. We addressed the following questions during our review: (1) How have refinery outages affected U.S. wholesale gasoline prices since 2002? (2) To what extent do available federal data allow for the evaluation of the impacts of refinery outages on petroleum product prices, and do these data reflect emerging trends in petroleum product markets that may be important to future analytical needs? For the purposes of this report, we define the various types of outages as follows: Planned outages are periodic shutdowns of one or more refinery processing units or possibly the entire refinery to perform maintenance, inspection, and repair of equipment or to replace process materials and equipment that have worn out or broken, in order to ensure safe and efficient operations. Unplanned outages are events where an entire unit or refinery must be brought down immediately and without advance notice and are caused by unplanned circumstances such as a fire or a power outage. To determine trends in refinery outages over the time period from 2002 through September 2008, we purchased data from Industrial Information Resources, Inc. (IIR) that contained detailed information on refinery outages, including the estimated dates of the outages, whether the outages were planned or unplanned, and the amount of reduced production capacity due to each outage. We evaluated the data and found they provide reliable estimates of outages from 2002 onward. In our analysis, we counted an outage event as the halting of production capacity on any piece of equipment at the refinery; where multiple units, such as a crude distillation unit and one or more secondary processing units, were simultaneously down, we counted this as a single outage event in our model. To evaluate how refinery outages have affected U.S. wholesale gasoline prices, we obtained and analyzed data from the Energy Information Administration’s (EIA) monthly refinery production survey form, EIA-810, for 2002 through 2006, and other EIA surveys.
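The counting rule described above—treating simultaneous downtime on multiple units at the same refinery as a single outage event—can be sketched as follows in Python; the record layout and grouping key are illustrative assumptions, not the structure of the IIR data.

```python
from collections import defaultdict
from datetime import date

# Hypothetical unit-level outage records: (refinery, unit, start, end, planned?)
unit_outages = [
    ("Refinery A", "crude distillation", date(2007, 3, 1), date(2007, 3, 10), False),
    ("Refinery A", "fluid catalytic cracker", date(2007, 3, 2), date(2007, 3, 8), False),
    ("Refinery B", "hydrotreater", date(2007, 5, 5), date(2007, 5, 20), True),
]

def collapse_events(records):
    """Merge overlapping unit outages at the same refinery into single outage events."""
    by_refinery = defaultdict(list)
    for refinery, _unit, start, end, planned in records:
        by_refinery[refinery].append((start, end, planned))
    events = []
    for refinery, spans in by_refinery.items():
        spans.sort()
        cur_start, cur_end, cur_planned = spans[0]
        for start, end, planned in spans[1:]:
            if start <= cur_end:   # overlaps the current event: extend it
                cur_end = max(cur_end, end)
                cur_planned = cur_planned and planned  # planned only if all pieces were planned
            else:                  # gap between spans: close out the current event
                events.append((refinery, cur_start, cur_end, cur_planned))
                cur_start, cur_end, cur_planned = start, end, planned
        events.append((refinery, cur_start, cur_end, cur_planned))
    return events

print(collapse_events(unit_outages))
```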
We also purchased (1) data that included detailed information on refinery outages between 2002 and 2008 from Industrial Information Resources, Inc. (IIR), a private company that provides research and forecasts for various large industries; (2) data estimating the quantity flows of gasoline and other petroleum products produced at most U.S. refineries and transported to those U.S. cities that make up the main markets for these products from Baker & O’Brien, an energy consultancy company whose software is licensed to a number of the 10 top U.S. refining companies; and (3) weekly wholesale price data for 75 U.S. cities’ gasoline markets from Oil Price Information Service, a private company that provides pricing and other data at the wholesale or "rack" level. We determined that these data were sufficiently reliable for the purposes of this report. We used the Baker & O’Brien quantity estimates to measure the proportion of each city’s product that is generally supplied by a particular refinery. We developed, and extensively tested, an econometric model that examined the statistical relationship between refinery outages and gasoline prices. We limited our analysis to outages that (1) were determined to be of the largest 60 percent within their market region and that lasted at least 3 days, (2) had a corresponding market city in the Baker & O’Brien data, and (3) for which we had useful and complete gasoline price data at the wholesale terminal level. In our model, we limited the effect of an outage on prices to one week, after which time we assumed that petroleum products were supplied from an alternate source. As a result, our analysis evaluated the short-term effects of outages but did not evaluate the length of time those effects occurred. In our model, we incorporated data on numerous factors that could affect gasoline prices—such as gasoline inventory levels and gasoline specifications—in order to rule out, or "control" for, their effects on prices. Because we were able to control for these other factors, we believe we were able to isolate the impacts of outages on prices given the inherent issues with the various datasets. There were some factors that potentially affected gasoline prices over time, as well as some city-specific information, that our model could not include, although we were able to use econometric techniques to control for some of these factors. After controlling for the additional factors that affected gasoline prices, we were able to estimate the average impact of outages on wholesale gasoline prices. The statistical significance of our findings is noted throughout the report. Although we focused our study on wholesale prices, we cannot be certain that the price effects at the retail level would be the same, although some research has shown that wholesale price changes are generally passed on to the retail level. In developing our model, we consulted with a number of economists and incorporated their suggestions wherever possible. Finally, we performed an analysis to test the robustness of our model, including changing various assumptions regarding the model in order to ensure that our results were not highly dependent on any single specification of the model.
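A minimal sketch of how the proportion-weighted outage measure could be constructed for a city-week panel is shown below. The supply shares and outage list are hypothetical; in the analysis itself, these inputs came from the Baker & O’Brien and IIR data described above.

```python
# Estimated share of each city's gasoline generally supplied by each refinery
# (illustrative values only).
supply_share = {
    ("Atlanta", "Refinery A"): 0.20,
    ("Atlanta", "Refinery B"): 0.10,
    ("Denver",  "Refinery C"): 0.60,
}

# Refineries with a qualifying unplanned outage in a given week (illustrative).
outages_by_week = {
    "2007-W10": {"Refinery A", "Refinery B"},
    "2007-W11": {"Refinery C"},
}

def outage_effect(city: str, week: str) -> float:
    """Sum the supply shares of all refineries serving the city that are out that week."""
    down = outages_by_week.get(week, set())
    return sum(share for (c, refinery), share in supply_share.items()
               if c == city and refinery in down)

print(outage_effect("Atlanta", "2007-W10"))  # 0.20 + 0.10 = 0.30
print(outage_effect("Denver", "2007-W10"))   # 0.0: no supplying refinery is down
```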
To assess the extent to which available federal data allow for the evaluation of the impacts of refinery outages and determine whether the data reflect emerging trends in petroleum product markets, we reviewed data collection instruments from federal agencies—including EIA, the Environmental Protection Agency (EPA), the Federal Energy Regulatory Commission (FERC), and the Department of Transportation (DOT)—assessing them for comprehensiveness, utility, accessibility, and potential gaps or limitations. In addition, we reviewed past GAO and other federal agency or intergovernmental agency studies on refined product markets to identify data gaps, limitations, or inconsistencies. Finally, we interviewed key industry, expert institution, and academic representatives regarding data limitations and utility in their own work and what other data concerns or needs they might have for future analyses. Our work was not a comprehensive evaluation of all federal energy data but rather an assessment of key data GAO used in this and past reports, as well as select other data that were determined during the course of our review to have posed limitations for GAO’s or other agencies’ evaluations of important questions. We conducted this performance audit from October 2008 through July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We developed an econometric model to explain the impact of refinery outages on gasoline prices. Our model controlled for as many contributing factors as possible; however, there were not always sufficient data available to control for all possible factors affecting wholesale gasoline prices. Our model examined how wholesale gasoline city rack prices were affected in the week during which a large unplanned refinery outage occurred. We examined weekly average data on wholesale city rack gasoline prices. We used data from 75 wholesale city racks from January 2002 through September 2008. We believe that the increased information from higher frequency data—for example, by using daily data—would be outweighed by the extra noise generated by such relatively high frequency data. Further, using lower frequency data, such as monthly data, runs the risk of obscuring some of the less extended but important effects of unplanned outages on gasoline prices. Another limitation of our analysis is that, in some cases, our data series for the control variables, described below, are generally available only on a monthly basis, in which case these values are assigned to the corresponding weekly observations. We consulted with government and academic experts to help develop our econometric model. Our variable of interest was the price of gasoline, specifically the wholesale rack price of gasoline. Our dependent variable was the logarithm of the wholesale city rack price of gasoline. Note that we include a time dummy variable for every time period so we do not have to deflate the wholesale price by a price index such as the producer price index or the price of crude oil. We used an augmented Dickey-Fuller test designed for panel data to test for stationarity in levels of our dependent variables, in the case of both unbranded and branded prices.
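One way to run the kind of panel stationarity check described above is a Fisher-type test that combines city-level augmented Dickey-Fuller p-values. The sketch below uses statsmodels and assumed column names; it is an illustration of the approach, not the exact test used in the report.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def fisher_panel_adf(df: pd.DataFrame, price_col: str = "log_rack_price") -> float:
    """Combine per-city ADF p-values (Maddala-Wu style) and return the panel p-value."""
    pvals = []
    for _, g in df.groupby("city"):
        series = g.sort_values("week")[price_col].dropna()
        if len(series) > 20:                   # require a reasonably long series
            pvals.append(adfuller(series, autolag="AIC")[1])
    statistic = -2.0 * np.log(pvals).sum()     # chi-square with 2N degrees of freedom
    return 1.0 - stats.chi2.cdf(statistic, df=2 * len(pvals))

# Usage under assumed columns (city, week, log_rack_price):
# panel = pd.read_csv("rack_prices.csv")
# print(fisher_panel_adf(panel))
```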
Our tests showed that our unbranded and branded dependent variables were stationary in levels. We examined separate models for unbranded and branded products to test for the consistency of our results. There may be multiple gasoline prices reported for a given city rack on a given date. In general, we used the wholesale rack price of the gasoline that is required in that specific city because we were interested in determining whether areas with non-standard gasoline specifications experienced larger gasoline price increases when a refinery that supplied their particular specification had an outage. By including a complete set of time dummy variables–one for each week's observation in the data–our model controlled for factors that vary only over time (and are invariant across cities), such as the national average price level, the price of crude oil, and seasonal effects.

Explanatory Variables–Measuring the Impact of an Outage on Gasoline Prices

Our primary interest was to examine the impact of refinery outages on gasoline prices. There are two key issues:

1. Identifying an outage. We acquired data on outage occurrences from IIR. These data provide information about the outage, including whether the outage was planned or unplanned, the date of the outage, the duration of the outage, and the capacity of the unit that was offline due to the outage.

2. Measuring the impact of a given outage on a particular city. For each city, we estimated the proportion of its product that it generally received from each refinery; a city may be served by one or more refineries. Our measure of an outage's impact was the proportion of a city's product that was generally supplied by the refinery (or refineries) experiencing an outage. If a city was generally estimated to receive no product from the refinery experiencing the outage, then the effect was zero, the explanatory variable was zero, and the refinery outage had no impact on that city's gasoline price. Alternatively, if, for example, a city received 20 percent of its product from said refinery, the explanatory variable had a value of 0.20 for that time period. It is also possible that a single city may have been impacted by more than one refinery outage at the same time, so in that case we would sum these effects. For instance, if in addition to the 20 percent impact example above, there was an outage at a refinery supplying 10 percent of the city's product, the explanatory variable would take a combined value of 0.30.
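The supply-share weighting in the 20 percent and 30 percent examples above can be computed directly from refinery-to-city share estimates and weekly outage flags. The sketch below is a minimal illustration; the city names, refinery identifiers, and column names are hypothetical rather than drawn from the Baker & O'Brien or IIR data.

```python
import pandas as pd

# Hypothetical share estimates: the proportion of each city's gasoline generally supplied
# by each refinery (placeholder values, not Baker & O'Brien estimates).
shares = pd.DataFrame(
    [("CityA", "R1", 0.20), ("CityA", "R2", 0.10), ("CityA", "R3", 0.70),
     ("CityB", "R3", 1.00)],
    columns=["city", "refinery_id", "supply_share"],
)

# Hypothetical weekly outage flags (1 = the refinery had a qualifying outage that week).
outage_weeks = pd.DataFrame(
    [("2006-02-10", "R1", 1), ("2006-02-10", "R2", 1), ("2006-02-17", "R3", 1)],
    columns=["week", "refinery_id", "outage"],
)

# Outage impact for city i in week t = sum over refineries of (outage flag x supply share).
# CityA, supplied 20 and 10 percent by two refineries that are both down in the same week,
# gets a combined impact of 0.30, matching the example in the text.
impact = (
    outage_weeks.merge(shares, on="refinery_id")
    .assign(weighted=lambda d: d["outage"] * d["supply_share"])
    .groupby(["city", "week"], as_index=False)["weighted"].sum()
    .rename(columns={"weighted": "outage_impact"})
)
print(impact)
```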
In addition to the impact of outages, our model includes other important variables that may influence the price of gasoline.

Volume of inventory of gasoline relative to the volume of sales of gasoline. This could affect the availability of gasoline at the wholesale level and hence affect prices. Prices should decrease when inventories are high relative to sales and should rise when inventories are low relative to sales. However, inventories and sales may themselves respond to changes in wholesale gasoline prices, so this variable may be endogenous.

Capacity utilization rate. This could affect the wholesale price of gasoline through changes in the availability of gasoline product. One possibility is that, when utilization rates are high, there would be more gasoline available, which would tend to lower prices; conversely, if utilization rates are low, less gasoline would be available, which would tend to raise prices. However, it is possible that as utilization rates approach very high levels, there are significant increases in the cost of production, which could then result in higher prices. Further, capacity utilization may react to changes in gasoline prices, so it is possible that this variable is endogenous.

Market concentration. Markets with fewer sellers of product, or that are more highly concentrated, may be associated with higher gasoline prices. However, the direction of effect may run the other way too, such that markets with higher prices may attract entrants, which may reduce the level of market concentration. We treat market concentration as an endogenous variable.

Lagged dependent variable. Our model includes lagged values of the left-hand side variable; namely, the logarithm of the wholesale price of gasoline. Gasoline price data may be serially correlated, and it is reasonable to include the effect of past gasoline prices on current gasoline prices.

Time fixed effects. We included a dummy variable for each time period in the analysis.

City fixed effects. We included a dummy variable for each city in the analysis. These city fixed effects may assist in controlling for unobserved heterogeneity.

Product specification. We included a dummy variable for each of the different types of gasoline used in our model.

Interaction between the product specification dummy variables and the outage impact variable. We included a set of interaction terms to test whether cities with special fuel requirements experience higher price increases due to outages.

We estimated a model of the following form:

y_it = x_it B1 + w_it B2 + c_i + f_t + u_it,    (1)

where y_it is the logarithm of the wholesale rack gasoline price at city i in week t; x_it is a vector of predetermined variables for city i in week t that are assumed to be independent of our error term, u_it, including a lagged value of our dependent variable; w_it is a vector of possibly endogenous variables at city i in week t; c_i is the fixed effect, or dummy variable, for city i; f_t is the fixed effect, or dummy variable, for week t; and B = (B1, B2) is a vector of parameters to be estimated.

Our key outage effect variable measures the percent of a city's product supply affected by an outage; that is,

s_it = Σ_r' Outage_ir't × p_ir',

where Outage_ir't is equal to 1 when an outage occurs at time t in the r'-th refinery that serves the i-th city, and the remaining term, p_ir', is the proportion of product provided by that refinery to that city. When there is no outage, Outage_ir't is equal to zero. Thus, this variable measures a city's reduction in product due to an outage (or outages). In the extreme case, there may be a single refinery that supplies 100 percent of a city's product, in which case the impact on product of an outage at that refinery on that city would be large, with a concomitant effect on that city's gasoline prices.

The outage impact may also have varied according to the type of fuel. The variable s_it measures the percentage of supply of product that was interrupted; it may not account completely for the difficulty in finding a replacement for that product. If a city used a fuel that is commonly produced, such as conventional clear gasoline, it would likely be more straightforward to find an alternative source of supply. However, if the city uses a special fuel, it may be more difficult to find an alternative refinery to supply that product. Therefore, in addition to a set of dummy variables for each fuel specification, we included a set of interaction terms of our outage supply effect variable with each of the fuel specification dummy variables.
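One way to construct the fuel-specification dummies and their interactions with the outage impact variable is sketched below. The specification labels and column names are placeholders; the actual fuel categories used in the model are not listed here.

```python
import pandas as pd

# Hypothetical city-week observations combining a fuel-specification label with the
# outage impact variable computed above; the specification names are placeholders.
df = pd.DataFrame({
    "city": ["CityA", "CityB", "CityC", "CityD"],
    "week": ["2006-02-10"] * 4,
    "fuel_spec": ["conventional_clear", "rfg", "low_rvp", "conventional_clear"],
    "outage_impact": [0.30, 1.00, 0.15, 0.00],
})

# One dummy variable per fuel specification...
spec_dummies = pd.get_dummies(df["fuel_spec"], prefix="spec", dtype=float)

# ...and an interaction of each dummy with the outage impact, which lets the estimated
# outage effect differ for special fuels relative to the base (conventional clear) effect.
interactions = spec_dummies.mul(df["outage_impact"], axis=0).add_prefix("outage_x_")

design = pd.concat([df, spec_dummies, interactions], axis=1)
print(design)
```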
We used xtivreg2 in Stata. The xtivreg2 estimation procedure allows us to estimate standard errors that are robust to heteroskedasticity and autocorrelation. We estimated the model using the logarithm of price as the dependent variable. Note that because we have time dummies, we do not need to control for crude oil prices or price inflation, because these are invariant across cities for a given time period and so are collinear with the time dummies. Our specification necessarily subsumes variables that only vary over time and do not vary across cities.

We used a C-statistic test to ascertain whether the inventory-to-sales ratio and the capacity utilization rate should be treated as endogenous or exogenous. In the case of both the unbranded gasoline price and the branded gasoline price models, our test could not reject the null hypothesis that these variables were exogenous. Measures of market concentration, such as the Herfindahl-Hirschman Index (HHI), have been shown to be endogenous, so we tested whether it was exogenous and used two-stage least squares when appropriate, using merger events as instruments. We used a C-statistic to test for the exogeneity of the spot market HHI. In the case of the unbranded gasoline price model, the test rejected the null hypothesis of exogeneity. In the case of the branded price model, the test could not reject the null hypothesis of exogeneity. We estimated both models treating the spot market HHI as endogenous, which we recognize might be a less efficient estimator but is nevertheless a consistent estimator. We used Hansen's J-statistic to test for over-identification of our instruments; namely, that they should be correlated with the regressors but uncorrelated with the regression errors. In every case, the J-statistic failed to reject the null hypothesis that our instruments were valid.

We estimated separate models for unplanned and planned outages. While unplanned outages can be reasonably viewed as exogenous—random—events, planned outages need to be scheduled more than a year in advance and may be scheduled to coincide with time periods of typically lower seasonal demand. Therefore, we believe it was appropriate to model the two types of outages separately. We estimated separate models for unbranded prices and branded prices. We also estimated model (1), except that we dropped those observations where waivers were in effect.

We found unplanned outages were significantly associated with an increase in unbranded gasoline prices. This impact is generally positive with respect to the price of all fuels. We further found this impact is significantly greater than the comparative or base effect (measured relative to the effect on conventional clear) for several special fuels. In addition, we found unplanned outages were significantly associated with an increase in branded gasoline prices, but the effect was smaller than for unbranded prices. This impact is generally positive with respect to the price of all fuels. There is also evidence that the impact is greater for some special fuels, although in fewer cases compared to the unbranded price results. Our results using planned outages to explain prices found no general statistically significant impact on gasoline product prices, either branded or unbranded. We found no substantive difference in our results for outage effects when we estimated model (1) without those observations where waivers were in effect.
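The estimation approach described above, with city and week fixed effects, the spot-market HHI treated as endogenous and instrumented by merger events, and standard errors robust to heteroskedasticity and autocorrelation, can be approximated outside of Stata as well. The following Python sketch uses the linearmodels package on synthetic data; it is a simplified stand-in for the xtivreg2 estimation, and the variable names, the synthetic data, and the single merger-event instrument are assumptions (with one instrument the equation is exactly identified, so the Hansen J test discussed above would not apply to this toy example).

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS  # pip install linearmodels

# Synthetic stand-in for the city-week panel described in this appendix; every column
# name and numeric value below is a placeholder, not GAO's data.
rng = np.random.default_rng(1)
cities, weeks = 20, 20
n = cities * weeks
data = pd.DataFrame({
    "city": np.repeat([f"c{i}" for i in range(cities)], weeks),
    "week": np.tile(pd.date_range("2002-01-04", periods=weeks, freq="W-FRI"), cities),
    "outage_impact": rng.uniform(0.0, 0.3, n),
    "inv_sales_ratio": rng.normal(1.0, 0.1, n),
    "cap_util": rng.normal(0.9, 0.05, n),
    "merger_event": rng.integers(0, 2, n).astype(float),  # instrument for the spot-market HHI
})
# The concentration control is an HHI, i.e., the sum of squared market shares in percent
# (shares of 50, 30, 15, and 5 percent give 2,500 + 900 + 225 + 25 = 3,650, for example).
data["hhi"] = 1200 + 300 * data["merger_event"] + rng.normal(0, 50, n)
data["log_price"] = (0.6 + 0.10 * data["outage_impact"]
                     + 0.0001 * data["hhi"] + rng.normal(0, 0.02, n))

# City and week fixed effects enter as dummy variables (one category dropped from each).
fe = pd.get_dummies(data[["city", "week"]].astype(str), drop_first=True, dtype=float)
exog = pd.concat([data[["outage_impact", "inv_sales_ratio", "cap_util"]], fe], axis=1)
exog.insert(0, "const", 1.0)

# Two-stage least squares with the HHI treated as endogenous, merger events as the
# excluded instrument, and kernel (HAC) standard errors.
model = IV2SLS(
    dependent=data["log_price"],
    exog=exog,
    endog=data["hhi"],
    instruments=data[["merger_event"]],
)
result = model.fit(cov_type="kernel")
print(result.summary)
```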
Cities included in the model. Our selection of cities was driven by availability in the Baker & O'Brien data. There are some 350 wholesale racks in the U.S.; however, the Baker & O'Brien data contain data on only 89 cities, and only 75 of those had complete series data that we could use for our model. The Baker & O'Brien cities comprise the most important city racks, and treating each of the 350 city racks as independent rack markets may not be appropriate. Rather, we can obtain a national picture by selecting the most important cities as per the list of cities in the Baker & O'Brien data.

Time period of analysis. We selected January 2002 through September 2008 because we deemed the IIR data to provide reliable information from 2002 onward.

Gasoline type. The gasoline data from OPIS were selected so as to generally reflect the type of gasoline that would be sold in a city, given local fuel regulations. In most cases, we were able to assign prices accordingly, but in some cases other types of fuel were used in the data. However, in the regression model, we control for whatever fuel type we did use.

Outages data. We believe the outages data from IIR are fairly comprehensive, but there are no federal requirements for refineries to report outages or an effort by the federal government to collect these data on a national basis. Consequently, some outages may not appear in the data, though it is unlikely that any major outages were missed during our study period. Further, we limited inclusion of outages to those that were at least 3 days in duration and ranked in the top 60 percent in terms of recorded capacity offline for a refinery's market region (as defined by IIR). Thus, we do not include every single outage, but we have a broad geographic range of the largest outages in the U.S.

Linkage data between refineries and the cities they serve. The Baker & O'Brien data have the following limitations: They are quarterly estimates of product flows and costs. These data are intended to be reasonably reflective of actual product dispersion across the United States. However, in the course of our analysis we had to interpolate some missing data and to extrapolate our data beyond the end-points of the available data. The Baker & O'Brien data also did not always contain complete data for the particular fuel that regulations required be used in that city. In some cases, seasonal variations in fuel requirements, such as RVP or oxygenate blending specifications, meant a precise match was not possible. However, in general, we were able to match the Baker & O'Brien fuel with these regulations.

Frequency of data. Except for our weekly wholesale gasoline price data, our other data were either monthly or quarterly, so we had to parse out the lower frequency observations accordingly.

Geographic level of analysis. Our analysis was performed at the city level, but some of the data we used were at a more aggregated geographic level. We used capacity utilization and inventory-sales ratio data at the PADD level. We did not have a measure of city-level sales data to determine the size of inventories relative to a local market, nor is there a relevant measure of capacity utilization at the city level; therefore, PADD-level data were used.

Economic indicators. Employment growth, the personal income growth rate, and the unemployment rate were available at the state level only.

Market concentration. Our measure of market concentration was an HHI measured using corporate refinery capacity at the spot market level. It is possible that in some cases these measures were too highly aggregated and that these control variables were less precise than would be ideal.
Number of outages. We did not take account of multiple outages at the same refinery on the same day; we simply established whether an outage occurred in a particular week, at a particular refinery. Although the size of the outage determined whether it was included in our analysis, the impact is treated the same regardless of how large an impact the outage had on the refinery.

Effects of an outage over time. We did not attempt to include dynamic effects of outages on prices in our model. We assigned an effect of an outage in the same time period (week), after which time our model implicitly assumed that the product was supplied from an alternate source.

Planned outages. We did not model planned outages in any detail. These planned events, by definition, did not generally give rise to surprise reductions in product supply. Hence, vendors had the opportunity to plan ahead and make arrangements to receive alternative sources of product. However, we did estimate an analogous model to equation (1) and found no significant impact on prices.

Inventories. Inventories included those domestic and customs-cleared foreign stocks held at, or in transit to, refineries and bulk terminals, and stocks in pipelines.

Gasoline sold outside the city rack. Our analysis does not account for gasoline that is not sold at the city rack. It is possible that significant transactions occur elsewhere that may affect the general wholesale market for a particular city.

Examining wholesale prices, not retail prices. Our analysis is at the wholesale price level, and the ramifications for retail prices are unclear. The effect on retail prices would depend upon the extent to which wholesale price changes are passed on to the retail sector.

Seasonal effects. Our model included a set of time dummy variables, which account for variation in prices due to seasonal effects. A more complete model might have contained specific seasonal effects, such as a set of monthly dummy variables interacted with the outage effect and also with each special fuel type. This would have allowed us to determine whether outages had a differential impact on prices according to the time of year and the fuel type. However, data limitations precluded a comprehensive evaluation of such effects; specifically, this would have required us to include more than 200 additional explanatory variables (the number of seasonal dummies times the number of special fuel types).

In addition to the individual named above, Shea Bader, Divya Bali, Benjamin Bolitzer, Dan Haas, Michael Kendix, Rob Marek, Michelle Munn, Alison O'Neill, Rebecca Sandulli, Benjamin Shouse, and Barbara Timmerman made key contributions to this report.
In 2008, GAO reported that, with the exception of the period following Hurricanes Katrina and Rita, refinery outages in the United States did not show discernible trends in reduced production capacity, frequency, and location from 2002 through 2007. Some outages are planned to perform routine maintenance or upgrades, while unplanned outages occur as a result of equipment failure or other unforeseen problems. GAO was asked to (1) evaluate the effect of refinery outages on wholesale gasoline prices and (2) identify gaps in federal data needed for this and similar analyses. GAO selected refinery outages from 2002 through September 2008 that were at least among the largest 60 percent in terms of lost production capacity in their market region and lasted at least 3 days. GAO developed an econometric model and tested a variety of assumptions using public and private data. While some unplanned refinery outages, such as those caused by accidents or weather, have had large price effects, GAO found that, in general, refinery outages were associated with small increases in gasoline prices. Large price increases occurred when there were large outages; for example, in the aftermath of Hurricanes Katrina and Rita. However, we found that such large price increases were rare, and on average, outages were associated with small price increases. For example, GAO found that planned outages generally did not influence prices significantly--likely reflecting refiners' build-up in inventories to meet demand needs prior to shutting down--while for unplanned outages, average price effects ranged from less than one cent to several cents per gallon. Key factors influenced the size of price increases associated with unplanned outages. One such factor was whether the gasoline was branded--gasoline sold at retail under a specific refiner's trademark--or unbranded--gasoline sold at retail by independent sellers. Our analysis showed that during an unplanned outage, branded wholesale gasoline prices showed smaller increases than unbranded prices, suggesting that refiners give preference to their own branded customers during outages, while unbranded dealers must seek out supplies in a more constrained market. Another factor that affected the size of price increases associated with outages was the type of gasoline being sold. Some special blends of gasoline developed to reduce emissions of air pollutants exhibited larger average price increases than more widely used and available conventional gasoline, suggesting that these special gasoline blends may have more constrained supply options in the event of an outage. Existing federal data contain gaps that have limited GAO's and Department of Transportation's (DOT) analyses of petroleum markets and related issues. For example: (1) Data linking refiners to the markets they serve were inadequate for GAO to fully evaluate the price effects of unplanned outages on individual cities, limiting the analysis to broader average effects. (2) Pipeline flow and petroleum product storage data were inadequate for DOT to fully address a January 2009 congressionally mandated study to identify potential pipeline infrastructure constraints, and limited GAO's ability to identify re-supply options for cities experiencing outage disruptions. Federal agencies generally have continued to update their data collection surveys to meet their respective needs and emerging changes in the energy sector.
However, in some cases the individual agency efforts have resulted in the collection of information that does not necessarily meet the data needs of other agencies or analysts who monitor petroleum product markets.
The mission of INS, an agency of the Department of Justice, is to administer and enforce the immigration laws of the United States. To accomplish this, INS is organized into three core business areas—enforcement, immigration services, and corporate services. Enforcement includes, among other things, conducting inspections of travelers entering the United States as they arrive at more than 300 land, sea, and air ports of entry; detecting and preventing the smuggling and illegal entry of aliens; and identifying and removing persons who have no lawful immigration status in the United States. Immigration services, which involve regulating permanent and temporary immigration to the United States, include granting legal permanent residence status, nonimmigrant status (e.g., tourists and students), and naturalization. Corporate services include records management, financial management, personnel management, and inventory management support for INS activities. INS' IT assets play a significant role in (1) receiving and processing naturalization and other benefit applications, (2) processing immigrants and nonimmigrants entering and leaving the United States, and (3) identifying and removing people who have no lawful immigration status in the United States. For example, the Computer-Linked Application Information Management System (CLAIMS 4) is a centralized case management tracking system that offers support for a variety of tasks associated with processing and adjudicating naturalization benefits. In addition, the Deportable Alien Control System (DACS) automates many of the functions associated with tracking the location and status of illegal aliens in removal proceedings, including detention status. INS has multiple efforts underway to develop and acquire new information systems and to maintain existing ones. According to INS, in fiscal year 2000, it obligated about $327 million on IT activities, including about $94 million for new development and the remaining amount, which includes enhancing existing systems, for operations and maintenance. For example, INS obligated $14.5 million in fiscal year 2000 to continue development of CLAIMS 4, which supports the processing of applications and petitions for immigrant benefits and is intended to fully replace CLAIMS 3. In addition, INS obligated about $18 million in fiscal year 2000 to further deploy its Integrated Surveillance Intelligence System (ISIS), which includes the deployment of intelligent computer-aided detection systems, unattended ground sensors, and fixed cameras along the northern and southern borders to provide around-the-clock visual coverage of the border. For fiscal year 2001, INS plans to spend about $226 million on IT for operations and maintenance activities. INS funds most of its IT efforts with operation and maintenance funds and currently is developing or maintaining 74 information systems. Recent reviews have identified several weaknesses in INS' management of its IT projects. For example, in August 1998, the Logistics Management Institute (LMI) reported that INS' Office of Information Resources Management (OIRM) (1) did not maintain accurate cost estimates for the complete life cycle of projects and (2) did not track and manage projects to a set of cost, schedule, technical, and benefit baselines. Further, LMI noted that while INS' System Development Life Cycle (SDLC) manual provides a good model for systems development projects, OIRM did not consistently follow it, often bypassing key SDLC phases.
Similarly, in July 1999, the Justice Inspector General (IG) reported that (1) estimated completion dates for some INS IT projects had been delayed without explanation for the delays, (2) project costs continued to spiral upward with no justification for how funds are spent, and (3) projects were nearing completion with no assurance that they would meet performance and functional requirements. Recognizing the need to address these weaknesses, INS established an Operational Assessment Team to analyze reported weaknesses and recommend specific actions to address them. The Operational Assessment Team validated the deficiencies identified in the LMI and Justice IG reports and identified additional ones. For example, the team found that system requirements were not consistently collected, recorded, documented, tracked, and controlled. To illustrate, of 105 projects reviewed by the team, fewer than 50 percent had documented requirements and most of the requirements that had been documented were not current. Further, in August 2000, we reported that INS did not have an enterprise architecture to guide the development and evolution of its information systems. An enterprise architecture is an institutional systems blueprint that defines in both business and technological terms the organization's current and target operating environments and provides a road map for moving from one to the other. It is required by the Clinger-Cohen Act and is a recognized practice of successful public and private sector organizations. INS had initiated some limited efforts to document its current architecture, but it had not yet begun developing a target architecture or a plan to move from the current to the target environment. Moreover, INS had not yet established the management structure and controls to develop the architecture. The absence of such an enterprise architecture increases the risk that the hundreds of millions of dollars INS spends each year on information systems will not be well integrated or compatible and will not effectively support mission needs and priorities. In 1997, INS established an investment review board (IRB). The IRB consists of four voting members—the Deputy Commissioner (Chair) and INS' three Executive Associate Commissioners—and advisory or supporting members, including the Director of the Budget Office and the Acting Associate Commissioner of the Office of Information Resources Management. In November 1998, INS also established the Executive Steering Committee (ESC) to support the IRB. The ESC comprises portfolio managers and advisory members, who analyze investment proposals and make recommendations on these proposals to the IRB. The IRB has established a process for selecting new IT proposals. According to INS officials, new proposals are developed throughout the year as business needs are identified and are forwarded to the appropriate portfolio manager for review. After reviewing the proposal, the portfolio manager forwards it to the ESC for consideration for funding. The ESC examines the proposals submitted and determines the appropriate funding for each project. Once funding is determined, the ESC forwards the proposed funding levels to the IRB, which makes the final investment selections and budget formulation decisions. See figure 2 for INS' new proposal selection process. As part of INS' annual budget execution process, the IRB considers the funding requests of ongoing and new projects.
Project managers define requirements for their ongoing projects, which they submit to the responsible portfolio managers for review. After reviewing the requirements and funding requests, each portfolio manager submits them to the ESC for review and to the IRB for approval. The approved funding is submitted to the Budget Office for inclusion into its budget execution process. According to INS officials, new proposals are considered for funding only after ongoing projects have been funded. Several recent management reforms—including the revision to the Paperwork Reduction Act and the passage of the Clinger-Cohen Act of 1996, the Government Performance and Results Act of 1993, and the Chief Financial Officers Act of 1990—have introduced requirements emphasizing the need for federal agencies to improve their management processes for selecting and managing IT resources. In particular, the Clinger-Cohen Act requires that the head of each agency implement a process for maximizing the value of the agency's IT investments and for assessing and managing the risks of its acquisitions. A key goal of the Clinger-Cohen Act is that agencies have processes and information in place to help ensure that projects are being implemented at acceptable costs within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance. We and the Office of Management and Budget (OMB) have developed guidance to assist federal agencies in managing IT investments. One such guide, Assessing Risks and Returns: A Guide for Evaluating Federal Agencies' IT Investment Decision-making, incorporates our analysis of the management practices of leading private and public sector organizations as well as the provisions of major federal legislation (e.g., Clinger-Cohen Act) and executive branch guidance that address investment decision-making. The guide provides a method for determining how well a federal agency is selecting and managing its IT resources and identifies specific areas where improvements can be made. To enhance this guidance, we issued an Information Technology Investment Management (ITIM) maturity framework in May 2000. ITIM provides a common framework for assessing IT capital planning and investment management practices by describing the organizational processes and their interrelationships that are the tenets of good investment management. ITIM is based on the best-practices work done as part of our ongoing research into the IT management practices of leading organizations. ITIM is a hierarchical model comprising five maturity stages. These maturity stages represent steps toward achieving stable and mature investment management processes. As agencies advance through the model's stages, their capability to manage IT increases. Each stage builds upon the lower stages and enhances the organization's ability to manage its investments. With the exception of the first stage, each maturity stage is composed of critical processes that must be implemented and institutionalized for the organization to satisfy the requirements of that stage. These critical processes are further broken down into key practices that describe the types of activities that an agency should be engaged in to successfully implement each critical process. An organization that has these critical processes in place is in a better position to successfully invest in IT. (See figure 3 for the five stages and associated critical processes).
As established by the model, each critical process contains five core elements that indicate whether the implementation and institutionalization of a process can be effective and repeated. The five core elements are:

Purpose: This is the primary reason for engaging in the critical process and states the desired outcome for the critical process.

Organizational commitment: This comprises management actions that ensure that the critical process is established and will endure. Key practices typically involve establishing organizational policies and engaging senior management sponsorship.

Prerequisites: These are the conditions that must exist within an organization to successfully implement a critical process. This typically involves allocating resources, establishing organizational structures, and providing training.

Activities: These are the key practices necessary to implement a critical process. An activity occurs over time and has recognizable results. Key practices typically involve establishing procedures, performing and tracking the work, and taking corrective actions as necessary.

Evidence of performance: This comprises artifacts, documents, or other evidence that supports a contention that the key practices within a critical process have been or are being implemented. This core element typically consists of the collection and verification of physical, documentary, or testimonial evidence and typically involves reviews by objective parties.

With the exception of the purpose core element, each of the other core elements contains key practices. The key practices are the attributes and activities that contribute most to the effective implementation and institutionalization of a critical process. (Figure 4 shows the relationship between the various ITIM components.) Our objectives were to determine whether (1) INS is effectively managing its IT investments and (2) the Department of Justice is effectively promoting, guiding, and overseeing INS' investment management activities. To determine whether INS is effectively managing its investments, we applied our ITIM framework and the associated assessment method. As part of the ITIM assessment method, INS conducted a self-assessment of its IT investment management activities using the ITIM framework. In its self-assessment, INS indicated whether it executed each of the key practices in stages two through five. INS asserted that it executed many of the key practices within stages two and three but only four key practices in all of stages four and five. Accordingly, we did not include ITIM stages four and five in the scope of our review. Also, we did not evaluate the key practices within stages two and three that INS stated it had not executed. We evaluated INS against 9 of the 10 critical processes in stages two and three. We did not evaluate INS against the stage three critical process Authority Alignment of IT Investment Boards. This critical process is only relevant if an organization has more than one IT investment board, and INS has only one. The nine critical processes we examined focus primarily on INS' ability to effectively select and control its IT investments. To determine whether INS had implemented these nine critical processes, we evaluated policies, procedures, and guidance related to INS' IT investment management activities. In particular, we analyzed the following: organizational charters, INS' System Development Life Cycle manual, requirements management process guide, and administrative manuals (e.g., Personal Property Handbook).
We also reviewed documentation associated with specific investment management activities, such as IRB and ESC meeting minutes, project management plans, system deployment plans, budget formulation and execution plans, quarterly reports to Justice, and contractor statements of work. In addition, we reviewed four IT projects to verify the execution of INS-defined processes, procedures, and practices. The four projects were selected based on the following criteria: (1) the projects should represent different life cycle phases (e.g., requirements definition, design, operations and maintenance), (2) the projects should support different INS business areas (e.g., Examinations, Enforcement), (3) at least one project should be considered high risk, and (4) at least one project should have been reviewed by Justice's Information Technology Investment Board (ITIB). The projects we evaluated are:

Coordinated Interagency Partnership Regulating International Students (CIPRIS): CIPRIS is an Internet-based system that is intended to modernize and streamline the current process for collecting information relating to nonimmigrant foreign students and other exchange program participants. It is intended to enable U.S. universities, schools, and cultural exchange programs to report and share information electronically with INS and other government regulatory agencies. INS has implemented an operational prototype of CIPRIS at 21 educational institutions. CIPRIS is a concept exploration project that supports the Examinations business area within INS. INS has designated CIPRIS as a high-risk project and it has been reviewed by Justice's ITIB. According to INS, it obligated about $3.1 million for CIPRIS in fiscal year 2000.

Computer-Linked Application Information Management System (CLAIMS) 4.0: According to INS, CLAIMS 4 is intended to improve delivery of naturalization services by fully automating INS' case management system. According to INS, CLAIMS 4 supports the Immigration Services Program within INS and is currently operational at 59 sites. According to INS, it obligated $14.5 million for CLAIMS 4 in fiscal year 2000.

Integrated Surveillance Intelligence System (ISIS): ISIS was established to detect and deter illegal intruders and to safely apprehend illegal aliens on the U.S.-Mexico and U.S.-Canada borders. ISIS is designed to provide all-weather sensor and video surveillance of the U.S. borders 24 hours a day, 7 days a week. The major components of ISIS are the Intelligent Computer-Assisted Detection system, ground sensors, and the Remote Video Surveillance system. ISIS supports the Enforcement program area within INS and has been reviewed by Justice's ITIB. According to INS, it obligated about $18 million for ISIS in fiscal year 2000 to further deploy the system.

Central Index System (CIS): CIS provides INS with information about persons of interest to INS. According to INS, CIS also interacts with various INS databases to provide the data necessary for INS operations. CIS currently maintains approximately 45 million detailed records on individuals of interest to INS. CIS supports INS' Corporate business area and is in the operations and maintenance phase of its life cycle. According to INS, it obligated about $2.6 million for CIS in fiscal year 2000.

We did not validate INS' IT spending obligations for fiscal year 2000 and IT spending estimates for fiscal year 2001.
To supplement our document reviews, we interviewed senior INS officials, including the Deputy Commissioner, who chairs the IRB, and the Executive Associate Commissioner for Management, who is the Chief Information Officer (CIO) and an IRB member. We also interviewed the Acting Associate Commissioner for Information Resources Management, who chairs the ESC; the Director of INS' Investment Management Team; portfolio managers; the Director of the Office of Strategic Information and Technology Development; IT project managers; program managers; Office of Budget representatives; and officials involved with the development and maintenance of INS' asset tracking systems. We compared the evidence collected from our document review and interviews to the key practices and critical processes in ITIM. Because ITIM is a hierarchical framework, the rating of each critical process is dependent on the key practices below it. Therefore, we first rated the key practices. In accordance with the ITIM assessment method, we rated a key practice as "executed" when we determined, by consensus, that INS was executing the key aspects of the practice. A key practice was rated as "not executed" when we determined that there were significant weaknesses in INS' execution of the key practice and INS offered no adequate alternative, or when the team found no evidence of a practice during the review. Once the key practices were rated, we rated each of the nine critical processes we reviewed. A critical process was rated as "implemented" if all of the underlying key practices were rated as being executed. A critical process was rated as "not implemented, but improvements underway" if over half, but not all, of its underlying key practices were rated as being executed. A critical process was rated as "not implemented" when there were significant weaknesses (i.e., fewer than 50 percent of the key practices had been implemented) in INS' implementation of the underlying key practices and no adequate alternative was in place. To determine whether the Department of Justice is effectively promoting, guiding, and overseeing INS' investment management activities, we interviewed officials within the Office of Information Management and Security Staff, the organization that plays a leading role in Justice's investment management activities. We also reviewed Justice's January 2000 investment management guidance, draft policy and guidance documents, INS project proposals, ITIB review and decision documentation, and quarterly briefing documents. We also discussed Justice's oversight activities with various officials within INS. We conducted our work at INS and Justice headquarters in Washington, D.C., from May 2000 through October 2000 in accordance with generally accepted government auditing standards. Justice's Assistant Attorney General for Administration provided written comments on a draft of this report. These comments are presented in chapter 5 and are reprinted in appendix I.
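The rating rules described above for ITIM critical processes amount to a simple decision procedure, sketched below in Python. The function name is illustrative, and the treatment of a process with exactly half of its key practices executed, a case the report's wording leaves open, is an assumption.

```python
from typing import List

def rate_critical_process(key_practices_executed: List[bool]) -> str:
    """Apply the rating rules described above to one ITIM critical process.

    Each element of key_practices_executed records whether a key practice was
    rated as executed. The thresholds mirror the report's wording; the report
    does not say how a process with exactly half of its practices executed
    would be rated, so that boundary case is grouped here with "not implemented"
    as an assumption.
    """
    total = len(key_practices_executed)
    executed = sum(key_practices_executed)

    if executed == total:
        return "implemented"
    if executed > total / 2:
        return "not implemented, but improvements underway"
    return "not implemented"

# Example: a critical process with five key practices, three of them executed.
print(rate_critical_process([True, True, True, False, False]))
# -> "not implemented, but improvements underway"
```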
The primary purpose of ITIM stage two maturity is to attain repeatable, successful IT project-level investment control processes and basic selection processes. For an organization to develop an overall sound IT investment management process, it must first be able to control its investments so that it can identify expectation gaps early and correct them. According to ITIM, stage two maturity includes (1) defining IRB operations, (2) developing a basic process for selecting new IT proposals, (3) developing project-level investment control processes, (4) creating an IT asset inventory, and (5) identifying the business needs for each IT project. INS has not fully implemented any of the critical processes associated with stage two; however, it has improvements underway and is close to fully implementing two of these processes. INS has (1) established an IRB, which comprises both IT and business senior executives and functions as INS' central decision-making body for IT projects, and (2) followed, through the IRB, a structured process for developing and selecting new IT proposals and making initial funding decisions for these proposals. However, INS has not yet developed some of the capabilities necessary to build a sound IT investment management process. For example, INS has not (1) established basic project-level control processes to ensure that its IT projects are performing as expected, (2) created an IT asset inventory for investment management, and (3) defined business needs for all of its IT projects. According to INS, it lacks these critical investment capabilities because it has not yet made IT investment management an institutional priority. Table 1 summarizes INS' stage two maturity. INS' capabilities for each of the stage two critical processes are discussed below. The purpose of this critical process is to define and establish the governing board or boards responsible for selecting, controlling, and evaluating IT investments. This includes defining the membership, guiding policies, operations, roles and responsibilities, and authorities for the investment board and, if appropriate, each board's support staff. These policies, roles and responsibilities, and authorities also provide the basis for the board's investment selection, control, and evaluation activities. According to ITIM, effective IT investment board operations require, among other things, that (1) the board membership include both IT and business knowledge, (2) the organization's executives and line managers support and carry out board decisions, (3) the organization create an organization-specific process guide that includes policies and procedures to direct the board's operations, and (4) the IRB operate according to these written policies and procedures. INS is executing many of the practices in this critical process. For example, INS has an IRB that functions as a central decision-making body for IT investments and is composed of senior executives from both INS' IT and business areas. During our discussions with agency officials, we found broad support within the organization for the IRB's decisions. For example, three of the four program/project managers we interviewed acknowledged the IRB's role in investment decision-making. The IRB is chaired by the Deputy Commissioner and includes INS' three Executive Associate Commissioners. The IRB is supported by an ESC, which comprises senior representatives who manage INS' eight IT portfolios. The ESC reviews and analyzes IT investments and makes recommendations to the IRB for final approval. This senior-level involvement and the breadth of representation help to demonstrate executive sponsorship of the process and support for the projects selected. While INS has an IRB, it is not functioning according to written policies and procedures.
Instead, the IRB operates according to undocumented procedures for selecting new IT proposals. According to the Director of INS' Investment Management Team, INS has begun developing written policies and procedures and plans to complete them about March 2001. However, until INS develops and implements these policies and procedures, key IT investment activities may not be done consistently, if at all. Table 2 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of project oversight is to ensure that the IRB provides effective oversight for its ongoing IT projects throughout all phases of their life cycle. Under stage 2 maturity, the IRB should review each project's progress toward predefined cost and schedule expectations, using established criteria, and take corrective actions when cost estimates and project milestones are not achieved. Implementing this critical process provides the basis for evolving the organization's IT investment control activities. According to ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for project management, (2) developing and maintaining an approved project management plan for each IT project, (3) having written policies and procedures for oversight of IT projects, (4) making up-to-date cost and schedule data for each project available to the IRB, (5) reviewing each project's performance by comparing actual cost and schedule data to expectations regularly, and (6) ensuring that corrective actions for each underperforming project are defined, implemented, and tracked until the desired outcome is achieved. INS is not effectively overseeing its IT projects. While INS has documented policies and procedures for project management in its System Development Life Cycle (SDLC) manual, it is not following its own procedures. For example, INS has not developed and maintained project management plans that include cost and schedule controls for each of its IT projects, an SDLC requirement. In fact, only two of the four projects that we reviewed had current project management plans. Furthermore, INS does not have written policies and procedures for oversight of its IT projects. Without written policies and procedures, INS increases the risk that project oversight activities will not be performed effectively. For example, the IRB does not (1) receive up-to-date cost and schedule data for each project, (2) oversee each project's performance regularly by comparing actual cost and schedule data to expectations, and (3) ensure that corrective actions are implemented and tracked for underperforming projects. In the absence of effective oversight, INS executives do not have adequate assurance that IT projects are being developed on schedule and within budget. Table 3 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of the asset tracking critical process is to create and maintain an IT asset inventory to assist in managerial decision-making. To make good investment decisions, an organization must know where its IT assets (i.e., personnel, systems, applications, hardware, software licenses, etc.) are located and how funds are being expended toward acquiring, maintaining, and deploying them. This critical process identifies IT assets within the organization and creates a comprehensive inventory of them.
This inventory can take many forms, but regardless of form, the inventory should identify each asset and its associated components. Beyond identifying IT assets, this process is used to support other ITIM critical processes by serving as an investment information and data repository that contains such items as the list of systems and projects and data on each project's progress toward achieving its plans. To support investment decision-making, this inventory should also be accessible where it is of the most value to decisionmakers. According to ITIM, effectively tracking IT assets requires, among other things, (1) making investment information available on demand to decisionmakers, (2) developing and maintaining an IT asset inventory according to written procedures, (3) overseeing the development and maintenance of the asset tracking process, and (4) assigning responsibility for managing this tracking process. INS has not implemented an effective IT asset tracking process for investment management. While investment information from various sources has been available to the IRB on an ad hoc basis, it is not available on demand and INS has not developed and maintained an inventory for investment management purposes according to written policies and procedures. In addition, the IRB does not oversee IT asset tracking activities and has not assigned responsibility for managing this tracking process to support investment decision-making. In the absence of standard, documented procedures for developing and maintaining the inventory, INS executives do not have adequate assurance that timely, complete, and consistent asset data are available to them. Table 4 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of defining business needs for each IT project is to ensure that each project supports the organization's business needs and meets users' needs. Thus, this critical process creates the link between the organization's business objectives and its IT management strategy. According to ITIM, effectively identifying business needs requires, among other things, (1) defining the organization's business needs or stated mission goals, (2) identifying users for each project who will participate in the project's development and implementation, (3) defining business needs for each project, and (4) training IT staff in business needs identification. INS has executed some of the key practices associated with effectively defining business needs for IT projects. For example, INS has (1) defined its business needs and mission goals in its annual performance plan and (2) identified users for its projects who participate in the projects' development and implementation. However, INS has not clearly defined specific business needs for each project. In addition, only one of the four project managers that we interviewed stated that he or she had been trained in business needs identification. In the absence of documented business needs, the IRB cannot ensure that it is selecting IT investments that meet its mission needs and priorities. Table 5 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of proposal selection is to establish a structured process for selecting new IT proposals.
According to ITIM, effective proposal selection requires, among other things, (1) designating an official to manage the proposal selection process, (2) using a structured process to develop new IT proposals, (3) making funding decisions for new IT proposals according to an established selection process, and (4) analyzing and ranking new IT proposals according to established selection criteria, including cost and schedule criteria. INS has established a structured process for selecting new IT proposals. The Deputy Commissioner, as the Chair of the IRB, is designated to manage INS' proposal selection process. In addition, INS uses a structured process to develop new proposals and makes initial funding decisions for these proposals. However, INS has not consistently analyzed and ranked these proposals according to established selection criteria. Established selection criteria would assist IT managers in creating proposals that best meet the needs and priorities of INS. Table 6 summarizes the ratings for each key practice and the specific findings supporting the ratings. An IT investment portfolio is a collection of investments that are assessed and managed based on common criteria. While an organization may have more than one level of investment portfolios, it should always have an enterprisewide portfolio. Managing investments as a portfolio is a conscious, continuous, and proactive approach to expending limited resources on all competing initiatives in light of the relative beneficial effects of these investments. Taking an enterprisewide portfolio perspective enables an organization to consider its investments comprehensively so that the investments address its mission, strategic goals, and objectives. A portfolio approach also allows an organization to determine priorities and make decisions about which projects to fund based on analyses of the relative costs, benefits, and risks of all projects, including projects that are proposed, under development, and in operation. The purpose of ITIM stage three maturity is to create and manage IT investments as a complete enterprise investment portfolio. Once ongoing projects can be implemented on schedule and within budget, as is emphasized in stage two, the organization is capable of managing its projects as an investment portfolio. According to ITIM, stage three maturity includes (1) defining portfolio selection criteria, (2) engaging in project-level investment analysis, (3) developing a complete portfolio based on the investment analysis, and (4) maintaining oversight over the investment performance of the portfolio. INS has not implemented any of the critical processes in stage three. In general, INS has not created the associated policies and procedures to initiate or perpetuate any of the critical processes, and as a result, it has not systematically collected and analyzed the data needed to make sound and informed decisions about competing investment choices, which consciously consider value and risk. In addition, while INS has established eight portfolio categories, it has not established an enterprisewide investment portfolio. Therefore, decisions may be made between competing investments within a business area, but INS cannot make trade-offs between investments across the enterprise to determine which projects contribute most to the agency mission and priorities. According to INS officials, INS has not yet made IT investment management an institutional priority. Table 7 summarizes INS' stage three maturity.
INS' capabilities for each of the stage three critical processes are discussed below. Portfolio selection criteria make up a necessary part of an IT investment management process. Developing an enterprisewide investment portfolio involves defining appropriate investment cost, benefit, schedule, and risk criteria to ensure that the selected investments will best support the organization's strategic goals, objectives, and mission. Thus, portfolio selection criteria need to reflect the enterprisewide and strategic focus of the organization. In addition, the criteria should (1) include cost, benefit, schedule, and risk elements, which serve to create a common set of criteria that are used to compare projects of different types to one another and (2) be clearly communicated to project managers throughout the organization so that these managers can take the criteria into account when developing proposals. Without portfolio selection criteria, projects may be selected on the basis of isolated business needs, the type and availability of funds, or the receptivity of management to a specific project proposal. Thus, according to ITIM, developing portfolio selection criteria requires, among other things, that (1) an investment board approve the criteria, including cost, benefit, schedule, and risk criteria; (2) the criteria be distributed throughout the organization; (3) adequate resources be provided for selection criteria definition activities; and (4) a working group be responsible for creating and modifying the criteria. INS developed criteria for selecting new proposals; however, the criteria had not been approved by the IRB and did not consistently include cost, schedule, benefit, and risk criteria. Furthermore, INS had not distributed the criteria throughout INS. For example, none of the IT project and program managers that we interviewed were aware of the selection criteria that had been developed. In addition, while INS indicated that it has adequate resources to develop complete portfolio selection criteria, it has not designated a working group to create and modify the criteria. Without useful selection criteria, INS is missing a critical means of ensuring that selected investments best support the organization's mission and priorities. Table 8 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of investment analysis is to ensure that all IT investments are consistently analyzed and prioritized according to the organization's portfolio selection criteria, which should include cost, benefit, schedule, and risk criteria. According to ITIM, effective investment analysis includes, among other things, that (1) portfolio selection criteria have been developed; (2) the IRB ensures that cost, benefit, schedule, and risk data are assessed and validated for each investment; (3) the IRB compares each investment against the organization's portfolio selection criteria; and (4) the IRB creates a ranked list of investments using the portfolio selection criteria. INS' IRB does not analyze and rank proposed and ongoing investments based on their expected cost, benefit, schedule, and risk. As mentioned previously, INS has not developed selection criteria that include these elements, nor has it ensured that cost, benefit, schedule, and risk data are assessed and validated for each IT investment. For example, none of the four projects we reviewed provided cost, benefit, schedule, or risk data to INS' IRB for consideration during the selection process. 
Instead, the IRB focused on the near-term cost (e.g., annual budget dollars) of each project and the perceived importance of the project to INS' mission. In the absence of portfolio selection criteria and good investment-related data (i.e., cost, benefit, schedule, and risk data), the IRB cannot compare and analyze its investments based on their cost, benefit, schedule, and risk expectations and create a ranked list of investments that best align with mission improvement goals and organizational direction. As a result, INS is missing critical information for making sound IT investment decisions. Table 9 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of the portfolio development process is to ensure that the IRB analyzes and compares all IT investments to select and fund those with manageable risks and returns and that best address the strategic business direction and priorities of the organization. Once this is accomplished, investments can be compared to one another within and across the portfolio categories and the best overall portfolio can then be selected for funding. According to ITIM, portfolio development requires, among other things, (1) defining common portfolio categories and assigning each investment to a portfolio category; (2) ensuring that investments have been analyzed and their cost, benefit, schedule, and risk data validated; and (3) examining the mix of investments across the portfolio categories in making funding decisions. INS does not assess all its IT projects in making selections for funding. While INS has defined common portfolio categories, it is not using them to manage its investments. INS has created eight portfolio categories and assigned all of its investments to one of the portfolios. However, the IRB has not analyzed these investments, including both proposed and ongoing projects, based on validated cost, benefit, schedule, and risk data. Without these meaningful data, the IRB cannot compare its investments across portfolio categories. As a result, the IRB cannot make trade-offs between investment alternatives, determine which projects contribute most to agency performance, or eliminate redundant systems. Table 10 summarizes the ratings for each key practice and the specific findings supporting the ratings. The purpose of the portfolio performance oversight critical process is to ensure that each IT investment achieves its cost, benefit, schedule, and risk expectations. This critical process builds upon the IT Project Oversight critical process by adding the elements of benefit measurement and risk management to an organization's investment control capacity. Executive-level oversight of project-level risk and benefit management activities provides the organization with increased assurance that each investment will achieve the desired cost, benefit, schedule, and risk results. According to ITIM, effective portfolio performance oversight requires, among other things, that the IRB (1) have access to up-to-date cost, benefit, schedule, and risk data; (2) monitor the performance of each investment in its portfolio by comparing actual project-level cost, benefit, schedule, and risk data to the predefined expectations for the project; and (3) correct poorly performing projects. INS does not monitor its investments' performance to ensure that they are meeting cost, benefit, schedule, and risk performance expectations.
As mentioned previously, up-to-date cost, benefit, schedule, and risk data are not available. Without these data, the IRB is unable to monitor the performance of its investments to ensure that they are achieving their cost, benefit, schedule, and risk expectations and to act when performance problems arise. Table 11 summarizes the ratings for each key practice and the specific findings supporting the ratings. The Clinger-Cohen Act of 1996 imposed rigor and structure on how agencies approach the selection and management of IT projects. Among other things, it requires the head of each agency to implement a process for maximizing the value of the agency's IT investments and assess and manage the risks of its IT investments. It also requires that the agency CIO work with the agency head in implementing this process. As such, Justice is responsible for ensuring that its bureaus and components, including INS, implement an effective IT investment management process. Justice has not provided INS, or any other Justice component, sufficient direction, guidance, and oversight of IT investment management activities. While Justice issued guidance in January 2000 describing its high-level investment management process, the guidance does not address the need or requirement for Justice's components to implement an IT investment management process. Specifically, this guidance does not instruct the components to establish IT investment management processes nor does it establish expectations for doing so. According to Justice officials, Justice had not established these processes because of other competing department priorities, even though the department and its components spent about $3 billion on IT in fiscal years 1999 and 2000. During the course of our work, Justice began drafting IT investment management policy and guidance documents in collaboration with an intercomponent working group. The draft policy directs Justice components to establish and use an IT investment management process and directs the Justice CIO to monitor the components' investment management processes through periodic briefings. A supplemental guidance document provides procedures for developing an investment management process. Justice officials stated that they plan to issue the final policy by the end of December 2000 and the guidance by March 2001. Until Justice issues its policy and guidance and begins monitoring its components' progress, it has no assurance that it has the necessary investment management processes in place to maximize the value of its IT investments and manage the risks associated with them. IT is critical to INS' ability to provide vital services, such as granting naturalization benefits and detecting and preventing the illegal entry of aliens into the United States. Effectively and efficiently managing IT requires, among other things, a structured approach for minimizing the risk and maximizing the return on IT investments. However, INS executives are making investment decisions involving hundreds of millions of dollars without vital data about these investments' relative costs, benefits, and risks. As a result, INS cannot adequately know whether it is making the right investment decisions, whether it has selected the mix of investments that best meets its overall mission and business priorities, or whether these investments are living up to expectations. INS has initiated efforts to establish an IT investment management foundation. 
However, it is lacking many important foundational investment management capabilities, particularly those relating to controlling projects against predetermined expectations and addressing variances. As a result, it runs the serious risk that its IT projects will be late, cost more than expected, and not perform as intended. INS' use of portfolio categories and portfolio managers provides some structure to its portfolio development process and provides each business area the opportunity to identify the projects that it determines to be the most important to its performance. However, INS' lack of performance data from ongoing projects handicaps the IRB's ability to perform its portfolio oversight function. In addition, the absence of any project-to-project comparison limits the IRB's ability to judge whether its mix of investments best meets its mission needs and priorities. As a result, INS can have little confidence that its chosen mix of IT investments best meets mission goals and priorities and that these investments will be developed within an acceptable level of risk, on time, and within budget. Further, Justice has a statutory role under the Clinger-Cohen Act to ensure that its component agencies, including INS, have effective investment management processes. Until Justice fulfills this role, it has little assurance that INS, or its other components, are investing the department's limited IT resources to maximize return on investment, minimize risk, and best support mission needs. To strengthen INS' investment management capability and address the weaknesses discussed in this report, we recommend that you direct the Commissioner of the Immigration and Naturalization Service to designate development and implementation of effective IT investment management processes as an agencywide priority and manage it as such. Specifically, you should direct the Commissioner to do the following: Develop a plan, within 9 months, for implementing IT investment management process improvements that is based on stages two and three critical processes and specifies measurable goals and time frames, ranks initiatives, defines a management structure for directing and controlling the improvements, establishes review milestones, and recognizes any direction and guidance that Justice issues. This plan should first focus on those critical processes in stage two of ITIM because, collectively, they provide the foundation for building a mature IT investment management process. Submit the plan to the Justice CIO for review and approval. Implement the approved plan and report to the Justice CIO, according to established review milestones, on progress made against the plan's goals and time frames. Further, because the absence of effective investment management processes and an enterprise architecture severely limits INS' ability to effectively manage its IT investments, we recommend that until INS develops a complete enterprise architecture and implements the key practices associated with stages two and three critical processes, as described in this report, you direct the Commissioner to limit requests for future appropriations for IT only to efforts that support ongoing operations and maintenance, but not major enhancements, of existing systems; support INS efforts to develop and implement IT investment management processes and an enterprise architecture; are small, represent low technical risk, and can be delivered in a relatively short period of time; or are congressionally mandated. 
Further, to improve Justice's guidance and oversight of components' IT investment management process activities, we also recommend that you direct the Justice CIO to follow through on the department's plans to issue an IT investment management policy and guidance to the components and to ensure that the policy and guidance: Directs Justice components and bureaus, including INS, to develop and implement IT investment management processes. Instructs Justice components and bureaus on how to develop an investment management process. This guidance should be based on the investment management guidance contained in this report and, at a minimum, should include component roles, responsibilities, authorities, and policies and procedures for developing an IT investment management process. Directs the Justice CIO to monitor the components' progress in developing and establishing an IT investment management process and take appropriate action if they are not progressing sufficiently. In written comments on a draft of this report, Justice's Assistant Attorney General for Administration generally agreed with our recommendations, although he offered minor wording modifications on two recommendations that he said would increase Justice's ability to fully implement them. The Assistant Attorney General for Administration also disagreed with our finding that Justice is not guiding and directing INS' investment management approach. Justice generally agreed with our recommendation that INS develop and submit to Justice a plan for implementing investment management process improvements. However, Justice suggested that the time frame for developing the plan be clarified such that INS has 6 months to develop and submit its plan to Justice once Justice issues its new IT investment management guidance. Because our recommendation directed INS to consider any Justice guidance and direction in developing its investment management process improvement plan, we modified the recommendation to include an additional 3 months to allow time for Justice to issue its guidance, which it plans to do in March 2001. Justice also concurred with our recommendation that INS limit future appropriation requests for IT to certain investment categories because it lacks an enterprise architecture and effective investment management processes, but suggested that we specify that this recommendation is in effect until INS completes its architecture and implements investment management processes. Because this is the intent of our recommendation, we clarified the recommendation to make this explicit. Also in its comments, Justice agreed that, while INS has some important investment management capabilities, INS still needs to develop effective investment management processes. Further, Justice agreed with our recommendation for Justice to issue an investment management policy and guidance to its components, including INS, that (1) directs components to develop and implement IT investment management processes, (2) instructs components on how to develop and implement these processes based on the investment management framework in our report, and (3) ensures that components' progress in doing so is monitored. Moreover, Justice stated, which we note in our report, that it is now working with its components to develop an IT investment management policy and process, and it has made this a department priority for this year. 
However, Justice stated that our draft report fails to recognize the extent of Justice's oversight of INS' IT investment management process. Further, it disagreed with our finding that Justice is not guiding and directing INS' investment management approach. Justice stated that it has established guidance for all aspects of IT management that its components are expected to follow and has a process for overseeing components' management of their investments. Justice cited six examples to illustrate its point, such as Justice approval authority of all component IT investments with life-cycle cost over $1 million, Justice establishment of an IT investment board, Justice meetings with components, including Attorney General meetings with the INS Commissioner, and Justice forwarding of OMB budget requirements to components. We do not agree with Justice's position. While we concur that the examples cited by Justice represent important IT management functions to be performed in providing management oversight of individual IT investments, such management oversight is not the focus of our findings, conclusions, and recommendations. Rather, our report addresses Justice's efforts to ensure that its components, including INS, have each defined and implemented effective IT investment management processes. As such, we sought evidence from Justice demonstrating that it has directed its components to establish such processes, provided guidance to its components on how to develop and implement these processes, and monitored its components' progress to determine whether they are implementing such processes. However, besides the steps that Justice initiated during the course of our inquiries and plans to take, which we have described in this report, we found no such evidence. Moreover, Justice stated in its written comments that it agreed with our recommendation for it to provide investment management process direction, guidance, and oversight to its components. Justice's written comments and our evaluation of them are presented in appendix I.
The Immigration and Naturalization Service (INS) invests hundreds of millions of dollars each year in information technology (IT) to help (1) prevent aliens from entering the United States illegally and remove aliens who succeed in doing so and (2) provide services or benefits to facilitate entry, residence, employment, and naturalization to legal immigrants. The Clinger-Cohen Act requires agency heads to implement a process for maximizing the value and assessing and managing the risks of their IT investments. GAO examined leading private and public sector IT management practices to determine whether INS is effectively managing its IT investments and whether the Department of Justice (DOJ) is effectively promoting, guiding, and overseeing INS' investment management activities. GAO found that INS lacks the basic capabilities upon which to build IT investment management maturity. Furthermore, INS is not managing IT investments as a complete portfolio. By managing its IT investments as individual projects, INS will not be able to determine which investments contribute most to the agency mission. GAO also found that DOJ is not guiding and overseeing INS' investment management approach.
Ex-Im is an independent agency operating under the Export-Import Bank Act of 1945, as amended. Its mission is to support the export of U.S. goods and services, thereby supporting U.S. jobs. Ex-Im’s charter states that it should not compete with the private sector. Rather, Ex-Im’s role is to assume the credit and country risks that the private sector is unable or unwilling to accept, while still maintaining a reasonable assurance of repayment. Ex-Im must operate within the parameters and limits authorized by law, including, for example, statutory mandates that it support small business and promote sub-Saharan African and environmentally beneficial exports. In addition, Ex-Im must provide financing on a competitive basis with other export credit agencies and minimize competition in government-supported export financing and must submit annual reports to Congress on its actions. Ex-Im operates in several functional areas under the leadership of its President, who also serves as Chairman of Ex-Im’s Board of Directors. Functional areas include the Office of the Chief Financial Officer, Office of General Counsel, Office of Resource Management, and Export Finance Group. The Export Finance Group is subdivided into business divisions that are responsible for underwriting related to loan guarantees, including processing applications, evaluating compliance of transactions with credit and other policies, performing financial analysis, negotiating financing terms, coordinating and synthesizing input to credit recommendations from other divisions, and presenting credit recommendations for approvals. Ex-Im offers export financing through direct loans, loan guarantees, and insurance. Ex-Im’s loan guarantees cover the repayment risk on the foreign buyer’s loan obligations incurred to purchase U.S. exports. Loan guarantees are classified as short, medium, or long term. Figure 1 highlights the basic differences between the three loan guarantee types. Further, while Ex-Im relies on the value of U.S. content as a proxy to evidence support for U.S. jobs, it also considers exporter-supplied data on jobs created or preserved. Because foreign content and local costs represent amounts that do not directly benefit the U.S. economy or U.S. employment, Ex-Im developed requirements that limit the extent to which such amounts are covered by its financing. Specifically, Ex-Im’s content policy includes criteria for identifying exports eligible for Ex-Im support based on the amount of foreign labor, materials, and overhead included in the production of the exported goods or services. Eligibility criteria and the amount of financing Ex-Im will provide based on content value depend on the terms of the financing requested and, for short-term transactions, whether the exporter is a small business. Ex-Im’s content policy is self-imposed and establishes (1) the level of financing available depending on the content of exported goods and services and terms of the transaction and (2) procedures for exporters to report content information to Ex-Im and for Ex-Im to assess the reasonableness of such information. When a loan guarantee transaction is approved, exporters are required to complete and provide an Exporter’s Certificate in which they certify the amount of foreign and domestic content included in the goods or services financed by Ex-Im. The certificate warns exporters of the federal penalties for making false statements.
Short-term loan guarantees: Ex-Im’s short-term loan guarantee product is focused on working capital financing (hereinafter referred to as working capital loan guarantees). Working capital loan guarantees may be approved for a single transaction or a revolving line of credit that can be extended up to 3 years. In general, if the financed eligible product contains more than 50 percent U.S. content, then the entire transaction value is eligible for an Ex-Im working capital loan guarantee. Generally, Ex-Im guarantees 90 percent of the loan’s principal and interest if the borrower defaults. Therefore, the lender maintains the risk of the remaining 10 percent. Ex-Im’s payment of working capital claims is conditional upon the guaranteed lenders’ compliance with Ex-Im requirements, such as underwriting policies, deadlines for filing claims, payment of premiums and fees, and submission of proper documentation. Ex-Im has reported that over 80 percent of its working capital guarantee transactions are approved by lenders with delegated authority, which means that commercial lenders approve the guaranteed loans in accordance with agreed-upon underwriting requirements without first obtaining Ex-Im approval. If a lender does not have delegated authority, Ex-Im performs its own underwriting procedures and approves the guaranteed loans. Medium- and long-term loan guarantees: Financing eligibility for Ex-Im’s medium- and long-term loan guarantee support is limited to the lesser of (1) 85 percent of eligible goods and services in the U.S. export contract or (2) 100 percent of the U.S. content. Ex-Im’s medium- and long-term loan guarantees generally cover 100 percent of the loan’s principal and interest if the buyer defaults. Ex-Im’s guarantee to the lender is transferable, carrying the full faith and credit of the U.S. government, and is unconditional, meaning that Ex-Im must pay submitted claims regardless of the cause of default, as long as the claim is filed timely and no amendments were made without Ex-Im’s consent. The underwriting of medium-term loan guarantees is generally performed by Ex-Im and approved by certain Ex-Im officers with delegated authority. For about 3 percent of medium-term loan guarantee transactions, Ex-Im has provided certain lenders delegated authority to underwrite and approve these guarantees. The underwriting of long-term loan guarantees is performed by Ex-Im and approved by the Ex-Im Board of Directors. Ex-Im’s annual authorizations for direct loans, loan guarantees, and insurance increased from about $12 billion in 2006 to over $27 billion in 2013, an increase of about 125 percent. Over the same period, Ex-Im’s staff level, as measured by full-time employees, increased from 376 to 403, about 7 percent (see fig. 2). Among annual authorizations for all Ex-Im financing products, the number of authorized working capital and long-term loan guarantee transactions fluctuated from 2006 to 2013, as shown in figure 3. However, the number of medium-term loan guarantee transactions decreased from 2006 to 2013. There were 533 working capital, 68 medium-term, and 73 long-term loan guarantee authorizations that totaled $14.9 billion, or 55 percent, of Ex-Im’s total annual authorizations in 2013. Although the number of working capital loan guarantees greatly exceeds the number of medium- and long-term loan guarantees, long-term loan guarantees account for the greatest dollar value of loan guarantees (see fig. 4).
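The content and coverage rules described above reduce to simple arithmetic. The following is a minimal sketch under stated assumptions: the function names and dollar figures are illustrative, not Ex-Im systems or terminology, and actual eligibility determinations involve many additional policy criteria. It applies only the limits stated in the text: a working capital transaction is fully eligible when the product is more than 50 percent U.S. content, with Ex-Im generally guaranteeing 90 percent of the loan, while medium- and long-term financing is capped at the lesser of 85 percent of the eligible export contract or 100 percent of the U.S. content.

```python
# Minimal sketch of the content-based financing limits described above. The
# function names and example figures are illustrative assumptions, not Ex-Im
# systems or terminology; actual eligibility involves many additional criteria.

def working_capital_eligible_amount(transaction_value: float,
                                    us_content_share: float) -> float:
    """If the financed product is more than 50 percent U.S. content, the entire
    transaction value is generally eligible for a working capital loan guarantee;
    this simplified sketch otherwise treats the transaction as ineligible."""
    return transaction_value if us_content_share > 0.50 else 0.0

def medium_long_term_financing_limit(contract_value: float,
                                     us_content_value: float) -> float:
    """Financing is limited to the lesser of 85 percent of the eligible U.S.
    export contract or 100 percent of the U.S. content."""
    return min(0.85 * contract_value, us_content_value)

if __name__ == "__main__":
    # Hypothetical $10 million export contract containing $8 million of U.S. content.
    limit = medium_long_term_financing_limit(10_000_000, 8_000_000)
    print(f"Medium/long-term financing limit: ${limit:,.0f}")  # $8,000,000

    # Hypothetical $2 million working capital transaction, 60 percent U.S. content;
    # Ex-Im generally guarantees 90 percent of the loan's principal and interest.
    eligible = working_capital_eligible_amount(2_000_000, 0.60)
    print(f"Eligible transaction value: ${eligible:,.0f}; guaranteed share: 90%")
```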
During the period 2006 through 2013, reported claim payments on defaulted loan guarantees have fluctuated and averaged $76.5 million. Such claim payments ranged from a high of $176.5 million in 2010 to lows of $19.3 million and $28.4 million in 2012 and 2013, respectively. Ex-Im’s Loan, Guarantee and Insurance Manual (Manual), which was updated in January 2013, February 2014, and March 2014, describes, among other things, Ex-Im’s underwriting procedures for long-term, medium-term, and working capital loan guarantees. The Manual describes the responsibilities of Ex-Im’s divisions (e.g., Transportation or Working Capital Finance) involved in the credit process, including application processing activities shown in figure 5. The underwriting of loans guaranteed by Ex-Im is performed by either Ex-Im loan officers or qualified lenders with delegated authority, which allows a lender to authorize a loan guaranteed by Ex-Im in accordance with agreed-upon underwriting requirements without first obtaining Ex-Im approval. All of the underwriting for long-term loan guarantees is performed by Ex-Im loan officers. According to Ex-Im officials, the underwriting process is essential to helping to prevent fraud because of the due diligence performed over the transactions and the transaction participants. Application intake. When an application is initially received, it is screened for basic completeness; follow-up on incomplete or unacceptable applications is performed; and once considered complete, it is assigned to a processing division. Application screening. After an application is determined to be complete, it is assigned to the applicable Ex-Im division that oversees the type of transaction. For example, an application for the purchase of an aircraft would be assigned to the Transportation Division. Once assigned, a loan officer in that division is to assess the eligibility of the transaction, including eligibility of the export item, additionality, and eligibility of the country where goods are to be shipped. To ensure compliance with laws and regulations, the loan officer is to obtain and assess various certifications from transaction participants, such as Iran Sanctions Certifications and antilobbying statements. The loan officer is also required to submit the corporate and individual names and addresses of lenders, borrowers, guarantors, and other transaction participants to the Ex-Im Library. Library staff are then to conduct a Character, Reputational, and Transaction Integrity review—a procedure designed to, among other things, provide due diligence over various risks and to help prevent fraud by checking loan participants’ information with approximately 20 databases. Databases include various U.S. government and international debarment and sanction lists, as well as lists maintained by international financial institutions, the World Bank, and the Inter-American Development Bank. These lists contain the names and addresses of individuals and organizations that have been debarred or placed on a sanctions list by major international financial institutions because of their involvement in irregular, fraudulent, or corrupt acts. A credit report is a document created by a credit reporting agency that summarizes the financial history of a party (a person or a business).
Examples of information found in a credit report include the amount of credit available to a party, how much of its credit limit a party tends to rely on for purchases, whether the party has a history of paying its bills on time, and whether a party has previously gone through bankruptcy. Ex-Im’s established procedures generally call for loan officers to obtain credit reports for medium- and long-term loan guarantee transactions, except when the primary source of repayment is another government or a financial institution. If there are any exceptions to credit information requirements, credit standards, or Ex-Im policy, the loan officer is required to document the rationale for exceptional treatment. As needed, the loan officer obtains input from other Ex-Im staff, such as attorneys, economists, or engineers, to reach a conclusion regarding the legal, technical, or country risks and the level of environmental or social impacts of the proposed transaction. In addition, Ex-Im may utilize external financial, legal, and technical advisors to assist in the due diligence process. With the assistance from these other Ex-Im staff and external advisors, as needed, the loan officer is in a position to confirm the eligibility of exports for financing and the eligibility of the U.S. and foreign content included in the transaction. Ex-Im has established detailed policy on the amount of foreign content it will finance. Based on this due diligence, the loan officer is to assess the transaction for risk and assign an overall risk rating to the transaction. This rating is used, in part, to determine the exposure fee Ex-Im will charge the borrower for guaranteeing the transaction. Greater risks result in higher fees. Credit structure. After the risk assessment and due diligence is performed, the loan officer determines the financing terms and conditions to be recommended. The loan officer is generally required to structure the transaction to include a security interest (collateral) in the financed goods or other assets of the borrower. If it is determined that collateral is not necessary, the loan officer is to document the explanation and other mitigating factors to indicate this is acceptable to Ex-Im (e.g., Ex-Im support is small relative to borrower’s size). For all aircraft transactions, the loan officer is required to perform an assessment and loan-to-value analysis of the collateral, and the financing terms must include requirements for the borrower to maintain the ownership and condition of collateral. Credit decision. The loan officer is to document the due diligence performed, including any analyses performed by external advisors, in a credit or board memo, which also contains the loan officer’s recommendation to approve or decline the transaction. These memos and applicable supporting documentation are then to be forwarded to the approving party. The credit memo applicable to working capital or medium-term transactions is to be provided to Ex-Im officials with individual delegated authority to approve transactions of $10 million and under. Board memos for long-term transactions or transactions greater than $10 million or those within certain sensitive sectors (e.g., steel) are to be presented to the Ex-Im Board of Directors for consideration and approval. When the underwriting and credit decision is delegated to preapproved lenders, Ex-Im does not perform the underwriting procedures. However, Ex-Im’s established procedures call for delegated authority lender examinations to be performed at least annually to assess lenders’ compliance with Ex-Im’s underwriting standards.
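The credit decision routing just described amounts to a threshold rule. The sketch below is illustrative only: the function and variable names are assumptions, and only the $10 million threshold, the routing of long-term transactions to the Board, and the steel example come from the text.

```python
# Hedged sketch of the approval routing described above; the thresholds and the
# steel example come from the text, but the function itself is illustrative only.

SENSITIVE_SECTORS = {"steel"}  # example of a sensitive sector named in the text

def approving_party(term: str, amount: float, sector: str) -> str:
    """Return which body would consider the credit or board memo.

    term: 'working_capital', 'medium', or 'long'
    """
    if (term == "long"
            or amount > 10_000_000
            or sector.lower() in SENSITIVE_SECTORS):
        return "Ex-Im Board of Directors (board memo)"
    # Working capital and medium-term transactions of $10 million and under
    return "Ex-Im official with individual delegated authority (credit memo)"

if __name__ == "__main__":
    print(approving_party("medium", 4_000_000, "agricultural equipment"))
    print(approving_party("medium", 25_000_000, "mining"))
    print(approving_party("long", 150_000_000, "aircraft"))
```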
To conduct these examinations, an Ex-Im examiner selects a sample of the lender’s outstanding Ex-Im guaranteed loans to analyze and assesses the lender’s credit underwriting practices to ensure they meet Ex-Im’s requirements and include due diligence and financial analyses. The examiner is required to review a loan’s application, authorization notice, credit memo, and loan agreement. Any exceptions to Ex-Im standards are to be noted in the examination report. Based on the seriousness of any exception(s), the lender receives either a Pass, Pass with Qualification, or Fail rating. Pass ratings indicate substantial compliance with only minor weaknesses; Pass with Qualification indicates overall adequate compliance except within a limited area of concern; Fail indicates a lack of compliance. A Pass with Qualification rating requires subsequent corrective action, while a Fail rating requires immediate suspension of delegated authority. Government-wide guidance for federal agencies to follow for the management and operation of a loan guarantee program includes the following: OMB Circular No. A-129, Policies for Federal Credit Programs and Non-Tax Receivables, revised in January 2013, prescribes policies and procedures for designing and managing federal credit programs, including requirements for applicant screening, assessing creditworthiness, loan documentation, and collateral. Treasury’s Bureau of the Fiscal Service’s Managing Federal Receivables provides federal agencies with a general overview of standards, guidance, and procedures for successful management of credit activities, including requirements for evaluating and documenting loan applications. The Manual, which describes Ex-Im’s loan guarantee underwriting procedures, provided a framework for Ex-Im officials to implement Ex-Im’s underwriting process requirements for loan guarantee transactions, and Ex-Im implemented many key aspects of its underwriting process as required by the Manual. For example, Ex-Im consistently documented (1) transaction participant information, (2) due diligence over borrower creditworthiness and eligibility of costs, (3) loan guarantee terms and conditions, and (4) loan guarantee transaction approvals. However, the Manual did not adequately address the underwriting process in the following four areas: (1) application screening, (2) risk assessment and due diligence, (3) credit structure, and (4) credit decision. Specifically, the Manual did not include certain procedures that should be performed, or sufficiently detailed instructions on how certain procedures should be performed and documented, to reasonably assure compliance with Ex-Im’s requirements and consistency with federal guidance prior to loan guarantee approval. Further, Ex-Im did not have mechanisms to verify compliance with certain established procedures related to loan guarantee transactions prior to approval. In addition, while Ex-Im’s process for scheduling its delegated authority lender examinations was consistent with federal guidance, it was not documented or consistent with its established procedures. Ex-Im’s Manual generally provided loan officers guidance to screen application documentation and verify that loan guarantee transactions met certain eligibility criteria consistent with federal guidance. However, the Manual did not address the federal requirement that applicants must not be delinquent on federal debt to be eligible for financing.
Further, the Manual did not include mechanisms to verify that certain application screening procedures were documented prior to loan guarantee approval, such as obtaining credit reports and documenting certain other eligibility procedures. We estimated that Ex-Im documented transaction participants, such as the applicant, borrower, guarantor, buyer, lender, end user, exporter, supplier, and guaranteed lender, for 100 percent of the medium- and long-term loan guarantee transactions. In addition, Ex-Im generally obtained from applicants and documented in the loan files the antilobbying statements, environmental screening documents, and Iran Sanctions Certifications, where applicable. The Department of Housing and Urban Development’s Credit Alert Verification Reporting System was developed in June 1987 as a shared database of defaulted federal debtors, and enables processors of applications for federal credit to identify individuals who are in default or have had claims paid on direct or guaranteed federal loans or are delinquent on other debts owed to federal agencies. The Do Not Pay List is a web-based single-entry access portal that federal agencies can use to gain access to several databases to assist in determining whether an individual or entity is ineligible, in part because of delinquent federal debt, to receive federal payments or engage in federal contracts or grants. Using tools such as these to verify that transaction applicants are not delinquent on federal debt helps assure that applicant eligibility is consistent with federal guidance. Ex-Im did not consistently follow its established procedures for obtaining credit reports or did not document why the credit reports were not applicable. Specifically, we estimated that credit reports were not obtained, and Ex-Im did not document why they would not be applicable, for 61 percent of the medium- and long-term loan guarantees. Ex-Im officials told us that they did not always enforce the requirement to obtain credit reports because information in such reports could be stale or erroneous and did not heavily influence the overall eligibility determination. Additionally, loan officers told us that credit reports were not needed for aircraft transactions because of the extensive financial analyses performed during the underwriting of these transactions as well as the prior approvals of the applicants’ creditworthiness for previous transactions. For longer-term financing, the applicant must provide 3 years of audited financial statements, as well as a thorough explanation of its business plan. Ex-Im officials stated that a loan officer’s due diligence provides information that is more extensive than that obtained through a review of a credit report. Nevertheless, Ex-Im did not have a mechanism to verify that credit reports were obtained in accordance with established procedures or that reasons for credit reports not being applicable were documented prior to loan guarantee approval. Obtaining credit reports for transaction participants, when available, could provide additional information to further enhance the loan officers’ knowledge about the financial condition of the transaction borrower and the risk associated with the proposed transaction. Ex-Im loan officers did not consistently implement established procedures for documenting the determination of certain other eligibility-related information for loan guarantee transactions. Character, Reputational, and Transaction Integrity reviews.
We estimated that Ex-Im documented its Character, Reputational, and Transaction Integrity reviews, including any identified issues and how they were resolved, as called for by Ex-Im’s established procedures, for 100 percent of the working capital loan guarantees. However, we estimated that about 4 percent of the medium- and long-term loan guarantee transactions did not include complete documentation showing that these reviews were performed. Export item eligibility and country eligibility. According to Ex-Im’s Manual, export item eligibility and country eligibility should be determined and documented during the loan guarantee application screening process as part of Ex-Im’s minimum eligibility requirements. However, while loan files we reviewed documented the export item and country involved in the transaction, we estimated that 39 percent of the medium- and long-term loan guarantee transactions did not have specific statements regarding the determination of export item eligibility, and we estimated that 41 percent of the medium- and long-term loan guarantee transactions did not have specific statements regarding the determination of country eligibility. Loan officers generally relied on institutional knowledge in making these determinations, but internal control standards (GAO/AIMD-00-21.3.1) call for transactions to be clearly documented, and overreliance on this institutional knowledge could create inconsistencies in the underwriting process and could lead to transactions not being documented in accordance with established procedures. Further, when institutional knowledge is a part of the basis for loan guarantee approval, there is a risk that such knowledge could be lost if employee turnover takes place. Ex-Im did not have a mechanism in place to verify that these procedures related to eligibility requirements were performed and documented prior to loan guarantee approval. Consistently performing and documenting eligibility determinations helps ensure transaction and applicant eligibility. In September 2013, the IG reported, based on its review of Ex-Im’s underwriting of direct loans, that Ex-Im lacked documented support for statements regarding additionality (the justification for Ex-Im support). We observed a similar issue during our review of a sample of working capital and medium- and long-term loan guarantee transactions. In its September 2013 report, the IG recommended that Ex-Im update procedures for loan officers to maintain detailed documentation regarding the need for Ex-Im support. Ex-Im agreed with the IG recommendation and stated that the Ex-Im Manual would be updated to require loan officers to maintain detailed documentation regarding the need for Ex-Im support. As of July 2014, Ex-Im had updated the Manual to include verification procedures for additionality for long-term transactions, and Ex-Im officials stated that they were considering ways to further document additionality statements for working capital and medium-term transactions. We estimated that Ex-Im documented 100 percent of its financial analysis of the applicants’ creditworthiness, the eligibility of costs, the value of U.S. and foreign content, the budget-cost level, the engineering report, and environmental analyses, where applicable, for the working capital and medium- and long-term loan guarantee transactions. In addition, Ex-Im also documented in the loan files the economic impact analysis required by the Manual, where applicable. We also estimated that Ex-Im documented 100 percent of its due diligence for country risk, sovereign risk, political risk, financial institution risk, and nonfinancial institution risk factors for medium- and long-term loan guarantee transactions.
Further, for 19 out of 19 aircraft loan guarantee transactions that we reviewed, Ex-Im documented the credit and legal strengths, weaknesses, and uncertainties related to the proposed transaction. Moreover, for 11 out of 11 medium-term loan guarantee transactions in excess of $1 million that we reviewed, Ex-Im documented the transaction risk classification in the loan file, as required by the Manual. However, loan officers did not document the analysis of country exposure, as required by Ex-Im’s established procedures, for an estimated 11 percent of the medium- and long-term loan guarantee transactions. Further, as noted above, credit reports were required for certain loan guarantee transactions. Also, Ex-Im’s Manual called for loan officers to document any mitigating factors for issues identified during the risk assessment and due diligence process. However, the Manual did not include detailed instructions for loan officers to use information in credit reports to enhance their financial analysis of transaction participants’ creditworthiness during the risk assessment and due diligence process. During our review of a sample of loan files, we noted that certain approved transaction participants’ credit reports contained issues, such as outstanding liens and poor credit history, yet the loan files did not contain documentation that these issues were identified by the loan officers or how these issues were mitigated. We inquired with Ex-Im loan officers about how these issues were mitigated, and they provided reasonable explanations. OMB Circular No. A-129 states that loan origination files should contain credit reports and credit analyses. Further, internal control standards require that controls and transactions be clearly documented and documentation be readily available for examination. However, Ex-Im did not have a mechanism in place to verify that the country exposure and information in credit reports were considered and documented during the risk assessment and due diligence process prior to loan guarantee approval. Ex-Im officials stated that the Credit Review and Compliance Division (CRC) directly monitors loan guarantee transactions by selecting and reviewing a random sample of loan guarantee transactions for compliance with Ex-Im’s established procedures. However, these reviews occur after the loan guarantees have been approved by Ex-Im and the funds have been disbursed by the lenders. Consistently performing and documenting reviews and key decisions related to risk assessments and due diligence helps ensure that Ex-Im’s financing is provided to applicants representing reasonable assurance of repayment. To be eligible for Ex-Im financing, exported goods and services must meet Ex-Im’s content requirements intended to ensure that U.S. jobs benefit from Ex-Im programs. In December 2013, the Ex-Im IG reported that only long-term loan guarantee transactions were subjected to procedures that could identify content-related discrepancies. Further, the IG reported that because of the lack of verification efforts and identified concerns regarding exporter certifications of content value, Ex-Im had limited assurance that content requirements are met and therefore that (1) Ex-Im finances only eligible exports and (2) its financing activities effectively achieve the agency’s mission of maintaining or increasing U.S. employment.
The IG also noted that Ex-Im relied on criminal penalty warnings to deter exporters from making false statements instead of confirming foreign and domestic content for working capital and medium-term transactions. We also found that Ex-Im relied on exporter self-certifications of each export’s foreign and U.S. content information when performing its due diligence process for working capital and medium-term loan guarantee transactions. In its December 2013 report, the IG recommended that Ex-Im implement procedures to verify the accuracy of exporter self-certifications of content information for a representative sample of transactions each fiscal year. Ex-Im agreed with the recommendation, and in June 2014, procedures to verify a representative sample of exporters’ self-certifications each fiscal year were approved for use. Ex-Im officials stated that they are now in the process of implementing the procedures. Generally, Ex-Im’s Manual provided loan officers with detailed guidance for determining and documenting key credit structure components—financing terms and conditions, including collateral requirements—of a loan guarantee transaction. We estimated that Ex-Im implemented established procedures and documented 100 percent of certain credit structure components of the loan guarantee transactions, such as transaction participants, financed amount, repayment terms, and exposure fee. In addition, while Ex-Im generally identified collateral associated with its transactions, it did not consistently document an assessment of collateral prior to loan guarantee approval. Specifically, Ex-Im’s established procedures for nonaircraft medium- and long-term loan guarantees did not call for assessments of collateral prior to approval of the transactions as recommended by federal guidance, including OMB Circular No. A-129. We estimated that 37 percent of the medium- and long-term loan guarantee transactions did not have documentation to show that the identified collateral was assessed prior to loan guarantee approval. As stated in OMB Circular No. A-129, the government can reduce its risk of default and potential losses through well-managed collateral requirements for many types of loans during the application screening process. Collateral requirements were clearly defined in Ex-Im’s Manual for aircraft and working capital loan guarantee transactions, but these requirements were not clearly defined in Ex-Im’s Manual for nonaircraft medium- and long-term loan guarantee transactions. For example, procedures specific to aircraft transactions require an analysis of collateral and require that the ownership and condition of the collateral be maintained. Ex-Im documented detailed assessments of collateral for all long-term aircraft transactions we reviewed. Further, the procedures specific to working capital loan guarantee transactions state that the transactions must be fully collateralized at all times. The collateral for these transactions typically consists of export-related inventory, export-related accounts receivable, and export-related general intangibles. In February 2014, Ex-Im updated its procedures for collateral requirements. This additional guidance contained more details related to the identification of collateral; however, it did not include steps for an assessment of collateral as recommended by federal guidance. Ex-Im officials stated that lenders are usually required to provide evidence of obtained collateral in financed goods, which is to be reviewed by Ex-Im staff.
However, this documentation was not maintained with underwriting documents, and loan officers generally told us that they did not rely on collateral during the underwriting process because it contributes to recoveries in the event of a default rather than to determining the reasonable assurance of repayment. Further, Ex-Im officials stated that since collateral is often negotiated and documented after loan approval, it may not be possible to assess collateral prior to approval. Ex-Im officials stated that the loan guarantees (including collateral) are monitored by Ex-Im’s Asset Management Division (AMD) once the loan guarantees become operative (approved by Ex-Im). However, having procedures for documenting an assessment of collateral prior to loan guarantee approval helps in the assessment of the overall financial risk of the proposed loan guarantee transaction. Ex-Im’s Manual described the tasks that need to be performed during the underwriting process and included credit memo templates to support a decision for various types of guarantees. However, the Manual did not include detailed instructions for preparation and inclusion of all required documents or analyses in a loan file prior to loan guarantee approval. We estimated that Ex-Im documented the approvals of 100 percent of all loan guarantee transactions with signatures on various loan guarantee documents. In addition, when lenders with delegated authority performed the underwriting for working capital loan guarantee transactions, we estimated that Ex-Im officials completed and signed the delegated authority checklist for 100 percent of the transactions. These checklists included transaction information such as the lender, borrower, loan amount, type of facility, letters of credit, warranties, disbursement date, primary and secondary collateral, and the terms of the loan. Although Ex-Im’s Manual described the tasks that needed to be performed during the underwriting process, it was not clear what documentation needed to be included in the loan file or when certain documentation was not required. Both OMB Circular No. A-129 and Treasury guidance state that loan origination files should contain loan applications, credit reports, credit analyses, loan contracts, and other documents necessary to conform to private sector standards for that type of loan. In addition, internal control standards state that all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. Further, the standards state that all documentation and records should be properly managed and maintained. Detailed instructions help to reasonably assure that complete documentation is included in the loan file prior to loan guarantee approval, providing further support for the final credit decision. In September 2013, the IG reported on Ex-Im’s underwriting of direct loans and found that loan officers did not always complete various checklists intended to reasonably assure that (1) all required commitment documents are obtained prior to loan guarantee approval, (2) borrowers are eligible, and (3) loan applications are complete and comply with Ex-Im credit policies and standards. The IG found that loan officers generally relied on their institutional knowledge of Ex-Im operations when underwriting loans. We found a similar condition with regard to the underwriting of loan guarantee transactions.
In its September 2013 report, the IG recommended that Ex-Im develop a systematic quality control review program or other mechanism(s) necessary to prevent, detect, and correct Ex-Im staff noncompliance with federal and agency credit program policy. Ex-Im agreed with the recommendation and stated that it would expand the scope of postauthorization reviews to assess compliance with federal and Ex-Im credit program policies. As of July 2014, Ex-Im had updated the Manual to include transaction assessments that call for standardized audits to evaluate transaction compliance with federal and Ex-Im credit program policies. In its September 2013 report, the IG also noted that Ex-Im’s record-keeping practices were inadequate. The IG noted that while the stated goal of Ex-Im’s Records Management Policy is the efficient and effective management of records and access, it did not address “how, where and by whom” loan documentation is to be maintained. Examples included records maintained by various individuals and divisions, requested files that could not be easily located, and disagreement on when and what files were to be transferred to Ex-Im’s central records facility. The IG noted that the lack of access to certain files hindered its ability to assess Ex-Im’s operations. We were similarly hindered in our testing of loan guarantee transactions. For example, we estimate that in 57 percent of the medium- and long-term loan guarantee transactions, not all loan documentation was kept in the loan file. In its September 2013 report, the IG recommended that Ex-Im evaluate its record-keeping practices to identify operational risks and to develop and implement a plan to address deficiencies. Ex-Im agreed with the recommendation and stated that it would evaluate its record-keeping practices and implement improvements in this area to address any deficiencies and operational risks. As of July 2014, Ex-Im officials stated that Ex-Im is in the process of developing an electronic application to address this recommendation. Ex-Im used a risk-based approach to schedule delegated authority lender examinations to verify that approvals made under delegated authority were done according to Ex-Im standards. However, this risk-based approach for scheduling these examinations was not documented. Ex-Im’s written procedures require examinations of its delegated authority lenders for working capital guarantee transactions at least yearly or more frequently if follow-up examinations are required to verify that corrective action is taken by the lender on any weaknesses found during the annual examination. However, Ex-Im officials stated that because of a lack of needed staff and resources, Ex-Im was using a risk-based approach to schedule the examinations, which was in accordance with OMB Circular No. A-129. Under this risk-based approach, lenders with less risk and less volume may have their examinations extended past the 12-month period. For example, for the 18 lenders that approved working capital loan guarantees in our review, Ex-Im performed annual examinations for 4 lenders, performed less frequent examinations for 12 lenders, and did not perform examinations for 2 of the lenders. Internal control standards call for internal controls to be clearly documented. Documenting the risk-based approach helps ensure consistent scheduling of the delegated authority lender examinations.
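Documenting the approach could be as simple as writing down the decision rule. The sketch below is illustrative only: Ex-Im's actual criteria were not documented, so the risk and volume thresholds and the non-annual intervals are hypothetical assumptions; only the annual default, the possible extension for lower-risk and lower-volume lenders, the follow-up after a Pass with Qualification, and the suspension after a Fail reflect statements in this report.

```python
# Illustrative-only sketch of what a documented risk-based examination schedule
# might look like. Ex-Im's actual (undocumented) criteria are not described in
# this report, so the risk/volume cutoffs and the 6- and 18-month intervals
# below are hypothetical assumptions.

def months_until_next_exam(risk_rating: str, annual_volume: float,
                           last_exam_result: str) -> int | None:
    """Return months until the next delegated authority lender examination,
    or None if delegated authority should be suspended immediately."""
    if last_exam_result == "Fail":
        return None  # the report states a Fail rating requires immediate suspension
    if last_exam_result == "Pass with Qualification":
        return 6     # follow-up to verify corrective action (interval is assumed)
    # Default is an annual examination; lower-risk, lower-volume lenders
    # may be extended past the 12-month period.
    if risk_rating == "low" and annual_volume < 25_000_000:  # hypothetical cutoff
        return 18
    return 12

if __name__ == "__main__":
    print(months_until_next_exam("low", 10_000_000, "Pass"))                     # 18
    print(months_until_next_exam("moderate", 60_000_000, "Pass"))                # 12
    print(months_until_next_exam("low", 10_000_000, "Pass with Qualification"))  # 6
    print(months_until_next_exam("high", 90_000_000, "Fail"))                    # None
```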
While Ex-Im has taken steps to prevent, detect, and investigate fraud and officials could describe the steps they use, Ex-Im has not documented its overall fraud process, which is recommended by several authoritative auditing and antifraud organizations as a key step in evaluating and updating these processes. According to Ex-Im officials, the underwriting process is essential to helping to prevent fraud because Ex-Im loan officers undertake a series of assessments to evaluate transaction risk. Detecting fraud is a potential outcome of the processes Ex-Im uses to monitor guaranteed loans as well as the steps it uses in its claims and recoveries process once a loan defaults. Lenders also play a role in detecting fraud and are required to communicate information to Ex-Im about significant changes in the risk of a loan guarantee. When Ex-Im employees suspect fraud in a loan guarantee, they are to contact Ex-Im’s General Counsel (GC) or the IG with these concerns. Ex-Im’s GC compiles relevant information and evidence from sources within Ex-Im and, as appropriate, makes a formal referral to the IG, which is responsible for investigating allegations of potential fraud in Ex-Im’s portfolio of loan guarantees (see fig. 6 for an overview of Ex-Im’s fraud prevention, detection, and investigation processes). Ex-Im has not documented its overall process for fraud prevention, detection, and investigation, including mapping out the respective roles and responsibilities of Ex-Im divisions and lenders that are key participants in Ex-Im’s efforts in these areas. Auditing and antifraud organizations recommend documenting an organization’s fraud policy in part because clearly defined roles and responsibilities help an organization more coherently respond to the risk of fraud. In addition, according to these organizations, documenting an overall antifraud policy is also a key step in enabling an agency to evaluate and test its antifraud processes for effectiveness and to update these processes as appropriate. Furthermore, if an organization publicly communicates certain aspects of its fraud detection policy, it can create a deterrent effect. Likewise, internal control standards call for internal controls to be clearly documented and communicated to employees. As shown in figure 6 and discussed below, Ex-Im uses a variety of organizational divisions in its efforts to prevent, detect, and investigate fraud, as well as lenders and the IG. Ex-Im officials told us that Ex-Im had not yet documented its overall approach to fraud, including clearly describing the processes and roles and responsibilities of divisions within Ex-Im. These officials explained that after a large fraud scheme involving the San Antonio Trade Group was uncovered in 2008, Ex-Im focused on improving awareness of fraud across all of the divisions in Ex-Im so that its staff developed and exercised professional skepticism to stay aware of fraud indicators. Creating a strong control environment in which staff are aware of and committed to fraud prevention and detection is a best practice recommended by audit and fraud examination associations. Documenting the antifraud process, including the respective roles and responsibilities of various organizational units involved in the process, helps to create a sound control environment and to update the approach based on changes in technology, processes, and organization. 
In addition, documenting the roles and responsibilities of the organizational units helps to identify aspects of the process that could be strengthened. A key principle in any organization’s approach to proactively managing fraud risk is to develop effective fraud prevention techniques because preventing dollars from being paid fraudulently is more effective and efficient than attempting to recover these dollars once they have been paid. Ex-Im officials indicated that the underwriting process is essential to helping to prevent fraud because Ex-Im loan officers undertake a series of assessments to evaluate risk and identify potential fraud indicators by reviewing financial, legal, and other documentation on key parties to a proposed loan guarantee. (See fig. 7 for an overview of fraud prevention processes.) Because the underwriting process involves loan officers reviewing applicant documentation, Ex-Im officials stated that the loan officers sometimes prevent fraudulent applications from being approved when they identify falsified documentation. For example, in one case, an Ex-Im loan officer noticed that the signatures on a new application were identical to the signatures on a previous application, as were the submitted financials (for instance, the amount of reported rent was identical to that on a previous application). Ex-Im officials stated that if issues are identified during underwriting, the primary recourse is to deny the application, but loan officers can also report the problem to Ex-Im’s GC for further review and follow-up or to the IG for investigation. Character, Reputational, and Transaction Integrity review: This review is designed to identify, among other things, potential risks related to the possibility that parties to the transaction are not legitimate or have been involved with fraud, corruption, or other suspect practices. Before recommending approval of a loan guarantee, loan officers are required to submit the corporate and individual names and addresses of lenders, borrowers, guarantors, and other transaction participants to the Ex-Im Library. As noted earlier, library staff search a series of 20 databases that include various U.S. government and international debarment and sanction lists for any red flags. This practice is similar to searching major international financial institutions’ debarment lists—an approach recommended by the Organisation for Economic Co-operation and Development for export credit agencies. Credit review: Once a transaction is considered eligible for Ex-Im support, the loan officer reviews the creditworthiness of the borrowers involved in the proposed loan guarantee transaction, which can help prevent opportunities for fraud, according to Ex-Im officials. Collateral requirements: Ex-Im requires collateral for medium-term loan guarantee transactions, which officials told us was in response to higher levels of fraud in this type of loan guarantee. Since this requirement was introduced, officials told us they have seen a decrease in the frequency of fraud within medium-term loan guarantee transactions, which they believe can be attributed at least in part to the collateral requirements. Oversight of delegated authority lenders: Since Ex-Im does not perform underwriting for loan guarantees that fall under delegated authority, Ex-Im’s fraud prevention activities rely on the delegated authority lenders and Ex-Im’s oversight of these lenders. Ex-Im accomplishes this oversight through the delegated authority lender examinations.
Fraud awareness training: In accordance with best practices from auditing and antifraud organizations, having a strong training program is another element of an effective fraud prevention strategy. Ex-Im officials reported providing fraud training to their staff using several different sources. Officials stated that the IG generally conducts fraud training for Ex-Im staff about every 18 months. For example, officials reported that the IG provided fraud awareness training to Ex-Im’s AMD in November 2013 on lessons learned through fraud investigations, money laundering trends, and economic and trade finance schemes. Ex-Im also leverages its internal knowledge of fraud by having its CRC provide agency-wide training on fraud tricks to be aware of. It also supplemented the training with a fraud tips and tricks memo—which lists common fraud indicators and their potential significance—that it provided to Ex-Im staff. Ex-Im staff also told us that a vendor was awarded a 5-year contract to offer formalized fraud training to include two classes each year for Ex-Im staff. This training occurred in January and June 2014. Ex-Im’s fraud detection process is a part of its overall monitoring of risk in its loan guarantee portfolio and the steps it uses during its claims and recovery process (see fig. 8). Ex-Im officials stated that once a loan guarantee is active, they use the following processes to monitor risk in Ex-Im’s portfolio and that these risk monitoring processes sometimes help officials detect fraud. Ex-Im’s CRC directly monitors loan guarantee transactions by selecting and reviewing a random sample of loan guarantee transactions for compliance with internal Ex-Im policies and procedures and ensures that the transactions occurred as outlined in the terms of the loans. According to Ex-Im officials, these reviews occur after the guarantees have been approved by Ex-Im and the funds have been disbursed by the lenders. Officials stated that these reviews have the potential to uncover irregularities and indicators of potential fraud—such as goods not being delivered or inappropriate documentation—which would result in staff communicating their concerns to Ex-Im’s GC or making a referral to the IG for further investigation. Ex-Im’s AMD monitors the credit and restructuring of medium- and long-term loan guarantees by annually reviewing each loan guarantee to determine if any changes in the risk rating are necessary. Officials stated that this review can create an opportunity to detect potential fraud. According to Ex-Im’s IG, which tracks the sources of referrals it receives, AMD has referred 64 cases of potential fraud to the IG for investigation since 2007. Ex-Im also tracks the status and probability of repayment of the loans it guarantees through lender reporting requirements, which Ex-Im officials noted provide another mechanism for Ex-Im to possibly detect fraud. Lender reporting requirements are outlined in the terms of a Master Guarantee Agreement—a document signed by Ex-Im and the lender to describe the responsibilities of each party. Specifically, lenders are required to inform Ex-Im of “material changes” in a guaranteed loan, meaning any changes a lender reasonably determines could materially and adversely affect the borrower’s ability to repay its debt, and that can stem from a poor business climate or potential fraud. 
For example, one lender contacted Ex-Im officials when a random audit, which the lender performed in 2006 as part of its own monitoring activities, found suspicious invoices for the sale of over $1 million of heavy equipment. Specifically, the lender found that an exporter provided false bills of lading—including one on which the vessel listed had never traveled to the port indicated. Also, the buyers listed on the false bills of lading claimed they never placed orders for the equipment. According to IG officials, this case was investigated by the IG, and in 2010, the owner of the export company was convicted of mail fraud. In addition, lenders indicated that they communicate with Ex-Im about changes to a loan guarantee that did not constitute material changes but represent causes for concern. For example, officials from one lender informed Ex-Im when they learned that an equipment company appeared to have a partial ownership in the exporter that the equipment company used to ship its product—a situation that can facilitate collusion by producing false shipping documents for products that were paid for through an Ex-Im guaranteed loan but were never actually shipped. When a guaranteed loan defaults, the guaranteed lender or exporter will typically file a claim with Ex-Im. As part of the claims process, Ex-Im’s Claims and Recovery Group typically drafts and sends a “demand for payment” letter to the delinquent borrower, and these letters sometimes elicit suspicious responses that may indicate defaults because of fraud. For instance, in response to a demand for payment letter, a borrower disputed Ex-Im’s claim and provided documentation that it had made timely payment to the lender through a wire transfer. Claims officials also discovered that the exporter had submitted three other claims for reimbursement to Ex-Im around the same time. Given the payment made by the buyer, Ex-Im officials concluded it was likely that the exporter submitted a fraudulent claim to Ex-Im and referred the exporter to Ex-Im’s IG for further investigation, which was ongoing as of July 2014. Ex-Im officials stated that the recovery process, in which Ex-Im determines the best options for maximizing the amount the defaulted party repays the lender, is the most common way that Ex-Im detects fraud. Because the recovery process involves activities such as physical reviews of the borrower’s assets and recovery officials going on-site to meet with the parties involved with the transaction, Ex-Im officials said that this process provides the most definitive opportunities to uncover discrepancies that indicate fraud. For example, in one case Ex-Im’s recovery officials determined that the exporter and buyer of a defaulted loan guarantee were family members and business-related documentation showed a likely relationship between the companies, such as having the same address for the borrower and exporter on bills of lading and websites. Ex-Im officials stated that the Claims and Recovery Group observes trends and patterns in claims that can aid Ex-Im in detecting fraud. For example, Claims and Recovery staff may observe a concentration of claims by certain buyers, exporters, or lenders or in certain geographic areas and industries (e.g., farming equipment, energy projects, or agricultural products such as seed). Officials described two instances in which country-based fraud patterns were identified by their staff. 
In the first instance, the staff reviewing claims realized that there was an increase in claims from the Philippines. Officials noted that the pattern was not immediately apparent because a number of different parties were involved in the claims, but they considered the increase in claims from a single country to be a potential fraud indicator, and claims staff referred these claims to Ex-Im’s GC for investigation. In another case, claims staff noticed a spike in claims from a single lender for loans to buyers in a single country. Further review by staff from CRC uncovered other anomalies involving this lender. The reviews by both claims staff and CRC led to uncovering a fraud scheme involving the lender’s staff in that foreign country. According to Ex-Im officials, Ex-Im’s GC plays a role in fraud detection by collecting additional information on fraud concerns reported to GC by Ex-Im staff and developing formal referrals to Ex-Im’s IG when additional information indicates that such action is warranted. GC officials explained that if a preliminary assessment suggests that a crime may have occurred, they will make a formal referral to the IG. If the concerns do not rise to that level, GC will make an informal referral to share information with the IG. According to the IG, since 2007, Ex-Im’s GC has made 104 referrals for investigation to the IG of information that Ex-Im employees believed indicated potential fraud. If Ex-Im staff determined that fraud may be the underlying cause of a default, Ex-Im can work with the IG to leverage the IG’s investigative resources to pursue involved parties (see fig. 9). According to IG officials, determining whether a default occurred because of fraud can be complex—there is no single formula for determining whether a guaranteed loan defaulted because of fraud or for another reason, and this determination can take time. IG officials stated that many of the guaranteed loans that defaulted because of fraud may not be flagged until 6 months after a loan was disbursed, depending on the terms of the loan, because some loans do not require an initial payment for 6 months or longer. In addition, IG officials stated that sometimes parties that obtained financing through fraudulent means that were not detected during the underwriting process will actually make an initial payment or two in order to seem more legitimate and put additional time and space between them and a default and investigation. When a guaranteed loan defaults within 6 to 12 months between the first advance and first payment in the loan term, Ex-Im officials said that they generally consider the default to be a potential fraud situation because it is less likely that a loan would make it through the underwriting process and be extended only to have its business condition deteriorate so quickly that it would default early in the life of the guaranteed loan. The IG consists of both auditors, who review Ex-Im’s internal activities, and criminal investigators and analysts, who follow up on allegations of waste, fraud, and abuse. As of July 2014, the IG’s Office of Investigation was staffed with the Assistant Inspector General for Investigations, the Deputy Assistant Inspector General for Investigations, two special agents, an investigative analyst, and a financial analyst. Ex-Im’s Manual for underwriting provided a framework for loan officers to determine that only qualified applicants representing reasonable assurance of repayment were provided loan guarantees.
While loan officers documented many key aspects of the underwriting process as required by the Manual, the process could be enhanced if the Manual included certain procedures to be performed, as well as sufficiently detailed instructions on how certain procedures should be performed and documented. For example, the Manual did not include procedures for verifying that loan guarantee transaction applicants were not delinquent on federal debt, or for performing assessments of collateral for nonaircraft medium- and long-term loan guarantee transactions. In addition, mechanisms to oversee compliance with Ex-Im’s established procedures, including those related to documenting the determination of export item and country eligibility and documenting the analysis of country exposure, did not exist. Furthermore, while Ex-Im implemented a risk-based approach to delegated authority lender examinations, the approach was not documented. Improvements in these areas could help enhance the assessment of transaction participant eligibility, the reasonable assurance of repayment, and the oversight of lenders, as well as help prevent fraud. Ex-Im also has not documented its overall process for preventing, detecting, and investigating fraud, including describing the roles and responsibilities of the divisions and officials within Ex-Im and of key participants in these processes. Doing so would aid in codifying institutional knowledge of these processes and facilitate communication about fraud for employees or lenders who may be new to Ex-Im. In addition, documenting these processes may be particularly important for Ex-Im because a number of divisions are involved in preventing, detecting, and investigating fraud, including but not limited to its AMD, Claims and Recovery Group, GC, and key participants such as Ex-Im’s lenders. Documenting the current fraud process is a key step in facilitating evaluations and testing the effectiveness of these processes. We recommend that the Chairman of the Export-Import Bank of the United States direct the appropriate officials to take the following six actions: Develop and implement procedures, prior to loan guarantee approval, for (1) verifying that transaction applicants are not delinquent on federal debt, including using credit reports to make such a determination, and (2) performing assessments of collateral for nonaircraft medium- and long-term loan guarantee transactions. Establish mechanisms to oversee compliance with Ex-Im’s existing procedures, prior to loan guarantee approval, for (1) obtaining credit reports for transaction borrowers or documenting why they were not applicable; (2) documenting certain eligibility procedures, including the Character, Reputational, and Transaction Integrity reviews for medium- and long-term loan guarantee transactions, export item eligibility, and country eligibility; and (3) documenting the analysis of country exposure. Develop and implement detailed instructions, prior to loan guarantee approval, for (1) preparing and including all required documents or analyses in the loan file and (2) using credit reports in the risk assessment and due diligence process. Update the Character, Reputational, and Transaction Integrity review process to include the search of databases to help identify transaction applicants with delinquent federal debt that would then not be eligible for loan guarantees. Document Ex-Im’s current risk-based approach for scheduling delegated authority lender examinations.
Document Ex-Im’s overall fraud process, including describing the roles and responsibilities of Ex-Im divisions and officials that are key participants in Ex-Im’s fraud processes. We provided a draft of this report to Ex-Im for review and comment. In written comments on a draft of this report, which are reprinted in appendix II, Ex-Im concurred with our six recommendations. Ex-Im also provided technical comments that we incorporated into the final report, as appropriate. In its written comments, Ex-Im described planned actions to address our recommendations. For example, Ex-Im stated that it will develop and implement procedures for verifying that transaction applicants are not delinquent on federal debt. Ex-Im also stated that management is reviewing its current Character, Reputational, and Transaction Integrity process to assess the inclusion of the Do Not Pay List database. Additionally, Ex-Im stated that it would update its Manual to include current practices related to collateral assessments of nonaircraft medium- and long-term loan guarantee transactions, as well as potential enhancements. Further, Ex-Im stated that it will develop and implement instructions for loan officers regarding the preparation and inclusion of all required documents in a loan file and mechanisms to oversee compliance with Ex-Im’s existing procedures prior to loan guarantee approval. Ex-Im also stated that it would document its risk-based approach for scheduling delegated authority lender examinations and its fraud process, including a description of roles and responsibilities. If implemented effectively, Ex-Im’s planned actions should address the intent of our recommendations. We are sending copies of this report to appropriate congressional committees, the Chairman of the U.S. Export-Import Bank of the United States, and the Ex-Im Inspector General. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Steve Lord at (202) 512-6722 or lords@gao.gov or Gary Engel at (202) 512-3406 or engelg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine the extent to which the Export-Import Bank of the United States (Ex-Im) (1) adequately designed and implemented procedures to reasonably assure compliance with its underwriting process requirements for loan guarantee transactions and consistency with federal guidance and (2) adequately designed procedures to prevent, detect, and investigate fraudulent applications for loan guarantees. To assess Ex-Im’s design of procedures related to underwriting loan guarantee transactions, we reviewed relevant requirements and guidance, including the Office of Management and Budget’s Circular No. A-129, Policies for Federal Credit Programs and Non-Tax Receivables; the Department of the Treasury Bureau of the Fiscal Service’s Managing Federal Receivables: A Guide for Managing Loans and Administrative Debt; GAO’s Standards for Internal Control in the Federal Government; and the Bureau of Industry and Security’s “Know Your Customer” Lending Guidance. We also examined Ex-Im’s charter, credit framework, content policy, and other due diligence procedures.
We compared the various requirements and guidance to Ex-Im’s Loan, Guarantee and Insurance Manual (Manual), which describes, among other things, Ex-Im’s procedures for assessing loan guarantee applications and preparing loan agreements. In addition, we reviewed Ex-Im’s Office of Inspector General (IG) reports since 2009 related to underwriting issues, various laws applicable to Ex-Im, and GAO reports related to Ex-Im. We also discussed underwriting requirements and the due diligence process with Ex-Im officials. For the implementation of the underwriting process, we identified key procedures from Ex-Im’s Manual. We also reviewed Ex-Im’s loan files used to document procedures performed during the underwriting process and Ex-Im’s financial statements from 2011 through 2013 along with other reports summarizing loan guarantee amounts. To select loan files to review, we obtained Ex-Im’s population of all authorized loan guarantee transactions from October 1, 2011, to March 31, 2013, which included data such as borrower, authorization amount and date, lender name, and product type. The total loan guarantee population contained 275 medium- and long-term loan guarantees totaling $21.9 billion and 792 working capital loan guarantees totaling $4.0 billion, which were authorized by Ex-Im from October 1, 2011, to March 31, 2013. To assess the reliability of data provided by Ex-Im, we analyzed information related to data elements and controls, reviewed the data for obvious errors in accuracy and completeness, compared data to published documents, and interviewed knowledgeable Ex-Im officials about the data. We concluded that the data elements we used were sufficiently reliable for purposes of selecting samples of loan guarantee transactions to review and for describing Ex-Im’s loan guarantee balances. Because of the differences in the underwriting process between the working capital loan guarantees and the medium- and long-term loan guarantees, we divided the population into two groups. The medium- and long-term guarantees were combined in one group while the working capital loan guarantees were placed in another group. For each sample, we made estimates for an attribute measure at the 95 percent level of confidence. From the sample design, we were able to conclude that the population error rate was less than 5 percent at the 95 percent level of confidence when no control violations were discovered in the sample. The final samples for the medium- and long-term transactions and the working capital transactions were 54 and 58, respectively. Because we followed a probability procedure based on random selections, each of our samples is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report. The results from our samples apply to the universe of loan guarantees authorized from October 1, 2011, to March 31, 2013.
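To illustrate the attribute-sampling logic behind that conclusion, the sketch below shows one standard way to compute an exact one-sided upper bound on a population exception rate when a sample contains zero exceptions, using the hypergeometric model for sampling without replacement. This is not necessarily the calculation GAO's statisticians performed; the function and the use of scipy are assumptions for illustration, and only the population and sample sizes are taken from the methodology described above.

```python
# Minimal sketch (not necessarily GAO's exact method): exact one-sided upper
# confidence bound on a population exception rate when an attribute sample of
# a given size from a finite population finds zero exceptions.
# Assumes sampling without replacement (hypergeometric model); population and
# sample sizes come from the methodology above.
from scipy.stats import hypergeom

def upper_bound_zero_exceptions(pop_size: int, sample_size: int, alpha: float = 0.05) -> float:
    """Largest exception count still consistent, at (1 - alpha) confidence,
    with observing zero exceptions in the sample, expressed as a rate."""
    d = 0
    # P(zero exceptions in the sample | d exceptions in the population) falls
    # as d grows; keep the largest d for which it is still at least alpha.
    while hypergeom.pmf(0, pop_size, d + 1, sample_size) >= alpha:
        d += 1
    return d / pop_size

# Medium- and long-term guarantees: population of 275, sample of 54
print(upper_bound_zero_exceptions(275, 54))   # roughly 0.047, below 5 percent
# Working capital guarantees: population of 792, sample of 58
print(upper_bound_zero_exceptions(792, 58))   # roughly 0.048, below 5 percent
```

Under these assumptions, a clean sample of 54 of the 275 medium- and long-term transactions, or 58 of the 792 working capital transactions, bounds the population exception rate below 5 percent at the 95 percent confidence level, consistent with the statement above.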
Based on our review of the Manual and the key procedures we identified, we developed a data collection instrument for working capital loan guarantee transactions and another data collection instrument for medium- and long-term loan guarantee transactions, which we used to test the two samples extracted from the loan guarantee population. Working capital loan guarantees. We divided our working capital loan guarantee data collection instrument into delegated authority and nondelegated authority transactions. For each nondelegated authority working capital loan guarantee transaction in our sample, we reviewed the credit memo to determine if Ex-Im documented the following: credit reports on individual guarantor(s) or Dun & Bradstreet reports if the guarantor was a business, analysis of contract and payment terms, financial analysis, bank references, collateral, and additionality. We also tested to see if exporter eligibility; Character, Reputational, and Transaction Integrity reviews, which included due diligence for Iran sanctions and other reputational risks; and appropriate signatures for approval from Ex-Im management were documented. For delegated authority working capital loan guarantees, we reviewed the selected transactions for exporter credit reports to determine if Ex-Im addressed any risks associated with the transaction participants and reviewed Dun & Bradstreet reports on the exporter. We tested to see if all required information and approvals were contained in Ex-Im’s Application Processing System, including documentation of the type of loan facility, a product description, primary and secondary collateral, use of proceeds, export markets, export value, loan amount, additionality, and any environmental impacts. In addition, we tested to see if the loan amount on our sample was equal to the loan amount reported in the Application Processing System and that the transaction was properly classified as a working capital transaction. We also tested to see if an Ex-Im program management assistant completed and signed a delegated authority checklist for the borrower that contained information such as the lender, borrower, loan amount, type of facility, letters of credit, warranties, final disbursement date, primary and secondary collateral, and term of the loan. We also tested to see if an appropriate Ex-Im official signed the delegated authority checklist. For each of the delegated authority working capital loan guarantee transactions in our sample, we examined Ex-Im’s available examination reports for the applicable delegated authority lenders and tested to see if the approved loan guarantee transactions were within delegated authority lender levels. We requested the three most recent examinations conducted for the 18 lenders involved in the delegated authority transactions from our sample (47 out of the 58 items in our sample were done under delegated authority). For 12 lenders, we received three reports; we received one report for 4 of the lenders and no report for 2 of the lenders. Medium- and long-term loan guarantees. The key procedures relate to the following four underwriting activities: (1) Application screening. For this activity, we evaluated the documentation of the eligibility of loan guarantee participants and whether the information contained in the loan guarantee transactions met minimum eligibility criteria standards.
We tested to see if the sample items contained complete transactional party information, including applicant, borrower, guarantor, buyer, lender, end user, exporter, supplier, and guaranteed lender. We determined if the eligibility of the export item was clearly stated or could be clearly assumed in the sample documents. We also tested to see if additionality, country eligibility, and any applicable State Department clearance were recorded. We tested to see if Ex-Im’s Credit Review and Compliance Division documented the Character, Reputational, and Transaction Integrity review process prior to loan guarantee approval, which included the full request, the participants screened list, and the results of the Ex-Im library’s research, and determined if issues identified in this review were resolved by viewing the board memo for each long-term loan guarantee transaction and the credit memo for each medium-term loan guarantee transaction. For all board-approved transactions, we also tested to see if the Iran Sanctions Certification was obtained. For all medium- and long-term loan guarantee transactions, we tested to see if a credit report, a Dun & Bradstreet report, or both, dated within 6 months of the application date, were obtained and maintained or, if not, whether Ex-Im documented why these reports were not applicable. Further, we tested to see if the loan guarantee application contained an antilobbying statement and an Environmental Screening Document. (2) Risk assessment and due diligence. For this activity, we evaluated the documentation related to financial risk factors and the commercial viability of the loan guarantee transaction. We tested to see if the engineering memo for long-term loan guarantee transactions and the credit memo for medium-term loan guarantee transactions documented the eligibility of costs and the value and eligibility of U.S. and foreign content. For long-term loan guarantee transactions, we tested to see if Ex-Im documented the environmental impact in the engineering report. For medium-term loan guarantee transactions, we tested to see if the Vice President of Ex-Im’s Engineering and Environment Division conducted an environmental impact analysis. For long-term loan guarantee transactions that were authorized after May 30, 2012, we tested to see if a summary of economic impact analysis was included. We also tested to see if Ex-Im documented its due diligence over certain applicable risks and the mitigation of those risks. These risks included country exposure and technical and environmental risk for long-term loan guarantee transactions. In addition, we tested to see if Ex-Im performed due diligence for sovereign risk, political risk, financial institution risk, and nonfinancial institution risk factors for both medium- and long-term loan guarantee transactions. We also reviewed credit and board memos to determine if Ex-Im summarized risks included in credit reports and the mitigating factors for those risks. We further reviewed loan files to determine if the overall budget cost level for both medium- and long-term loan guarantee transactions was documented. For aircraft transactions, we tested to see if the credit and legal strength, weakness, and uncertainties analyses were completed. For medium-term loan guarantee transactions in excess of $1 million, we tested to see if the transaction risk classification was assessed and documented in the loan file.
We tested the appendixes of the credit memo for long-term loan guarantee transactions to see if the financial assessment, engineering report, country risk assessment, risk rating summary, economic analysis, and any special conditions were documented. (3) Credit structure. For this activity, we evaluated the documentation of the financing terms and conditions, including collateral to be recommended for the loan guarantee transaction. For medium- and long-term loan guarantee transactions, we reviewed the loan file to determine if Ex-Im documented the exposure fee calculator worksheet; the repayment terms, to ensure that the terms did not exceed the useful life of the collateral or scope of supply; the long-term action sheet; and the medium-term decision memo. We tested to see if collateral for the loan guarantee had been identified and assessed and to see if the loan agreement included requirements for the borrower to maintain the collateral, where applicable. (4) Credit decision. For this activity, we evaluated the approved amount, proper categorization of the loan guarantee, and the appropriate records storage for the loan documentation. For long-term and aircraft loan guarantee transactions, we tested to see if there was approval by an Ex-Im loan officer, an Ex-Im lawyer, and the Vice President of Trade Finance on the credit memo. For medium-term loan guarantee transactions, we tested to see if the credit memo was signed by the appropriate official. For long-term loan guarantee transactions, we tested to see if approval signatures on the board memo were documented. For all selected loan guarantee transactions, we tested to see if the actual loan amount per Ex-Im’s Application Processing System was equal to the loan agreement and equal to or less than the financed amount approved. We also tested to see if the loan agreement was signed by all applicable parties and that the transaction was approved by the appropriate authority level. In addition, we tested to see if the loan was properly categorized as medium or long term and that the loan documentation was kept in the appropriate records storage location. For any unresolved issues found on both the working capital and medium- and long-term loan guarantee data collection instruments, we followed up with Ex-Im officials to evaluate any additional supporting documentation not originally included in the loan files. Additionally, we interviewed Ex-Im loan officers, directors, and other officials to obtain further perspective on the underwriting process of loan guarantee transactions. To evaluate whether Ex-Im adequately designed procedures to prevent, detect, and investigate potential fraud in loan guarantee transactions, we identified key procedures by reviewing Ex-Im’s established procedures and interviewing officials from the Ex-Im IG. We also interviewed lenders and Ex-Im officials to identify steps Ex-Im has taken to strengthen lender fraud procedures. We evaluated the federal standards and, as appropriate, industry standards and best practices for preventing, detecting, and investigating fraud. We reviewed Ex-Im’s practices for verifying that debarred lenders and borrowers have not been authorized to receive loan guarantees.
To assess Ex-Im’s implementation of procedures to prevent fraudulent applications from being approved, we reviewed the same sample of loan guarantee transactions described above to determine whether Ex-Im effectively implemented its procedures, such as reviewing and evaluating documentation and implementing its Character, Reputational, and Transaction Integrity check, to detect and prevent fraudulent applications from being approved during the underwriting process. To identify and evaluate Ex-Im’s processes for detecting potential fraud among active loan guarantee transactions, we interviewed officials about current practices and reviewed documentation of cases showing how Ex-Im detected potential fraud under a variety of circumstances. We conducted this performance audit from March 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Marcia Carlsen (Assistant Director), Joah Iannotta (Assistant Director), Jeanette Brahame, Michael Bird, Katy Crosby, Dennis Fauber, Natasha Guerra, Cole Haase, Debra Hoffman, Dragan Matic, and Carroll Warfield Jr. made key contributions to this report.
Ex-Im serves as the official export credit agency of the United States, providing a range of financial products to support the export of U.S. goods and services. Following the 2007-2009 financial crisis, increased demand for Ex-Im support resulted in significant increases in Ex-Im's outstanding financial commitments and risk exposure, which heightened interest in ensuring that Ex-Im has procedures in place to minimize financial risks. GAO was mandated by the Export-Import Bank Reauthorization Act of 2012 to review the extent to which Ex-Im (1) adequately designed and implemented procedures to reasonably assure compliance with its underwriting process requirements for loan guarantee transactions and consistency with federal guidance and (2) adequately designed procedures to prevent, detect, and investigate fraudulent applications for loan guarantees. To address these objectives, GAO (1) reviewed Ex-Im's relevant procedures and federal guidance; (2) conducted tests on statistically random samples of loan guarantees authorized between October 1, 2011, and March 31, 2013; and (3) interviewed Ex-Im officials. The Export-Import Bank's (Ex-Im) Loan, Guarantee, and Insurance Manual (Manual) describes Ex-Im's underwriting procedures and generally provides loan officers with a framework to implement its underwriting process requirements for loan guarantee transactions. GAO's review of a statistical sample of loan guarantees indicated that Ex-Im implemented many key aspects of the underwriting process as required by the Manual. However, the Manual did not (1) include certain procedures or sufficiently detailed instructions to verify compliance with Ex-Im's requirements and consistency with federal guidance, such as a procedure to verify that loan guarantee transaction applicants did not have delinquent federal debt; (2) include instructions for loan officers to use credit reports and for the inclusion of all required documents and analyses in the loan file prior to approval; and (3) call for assessments of collateral, as required by federal guidance, for certain loan guarantee transactions prior to approval. Further, Ex-Im did not have mechanisms to verify compliance with certain established procedures, including documenting certain loan guarantee eligibility procedures. In addition, Ex-Im's current risk-based approach for scheduling examinations to monitor lenders with delegated authority to approve guaranteed loans was not documented. Improvements in these areas help enhance the assessment of transaction participant eligibility and the reasonable assurance of repayment, as well as help prevent fraud. While Ex-Im has processes to prevent, detect, and investigate fraud, Ex-Im has not documented its overall processes for doing so. Such documentation is recommended by several authoritative auditing and antifraud organizations as a key step in evaluating and updating these processes. The processes Ex-Im used to prevent and detect fraud were part of its underwriting and monitoring of loan guarantees. A number of divisions within Ex-Im, as well as lenders, played a role in preventing fraudulent applications from being approved and monitoring activity that could help detect potential fraud. If a guaranteed loan defaults and an indicator of fraud existed, staff would work with Ex-Im's Office of Inspector General to leverage its investigative resources to pursue involved parties. 
GAO is making a number of recommendations to Ex-Im to enhance its loan guarantee underwriting process with additional procedures for ensuring compliance with Ex-Im and federal requirements, as well as for documenting fraud processes. In commenting on GAO's draft report, Ex-Im concurred with GAO's recommendations.
Long-term care services assist people who need help in performing activities of daily living (ADLs), such as eating, bathing, and dressing. As some of these services can be expensive, especially services provided in a nursing home, long-term care insurance helps people pay for the cost of care. However, relatively few people have obtained long-term care insurance through products sold in the individual and group markets. To help federal employees, retirees, and others obtain coverage, the federal government began offering the opportunity to apply for long-term care insurance in 2002 through an employer-sponsored group program in which enrollees pay the entire cost of their premium. Long-term care refers to a range of support services provided to people who, because of illness or disability, generally are unable to perform ADLs for an extended period. Long-term care services include medical, social, and personal services. Care may be provided in various settings, including facilities such as nursing homes or assisted living facilities, a person’s own home, or the community. Both paid and unpaid caregivers may provide long-term care services. As a person ages, his or her ability to perform basic physical functions typically declines, increasing the likelihood that he or she will need long-term care services. People can purchase long-term care insurance directly from carriers that sell products in the individual market or can enroll in products offered by employer-sponsored and other groups. Long-term care insurance policies sold in the individual and group markets cover costs associated with long-term care. For a specified premium amount that is designed—but not guaranteed—to remain level over time, the carrier agrees to provide covered benefits under an insurance contract. First sold in the 1970s, long-term care insurance has evolved from initially offering coverage for nursing home care only to offering comprehensive coverage. Comprehensive coverage pays for care provided in facilities such as nursing homes and other settings such as a person’s home. Insurance is generally purchased for defined daily benefit amounts and benefit periods, with elimination, or waiting, periods. For example, long-term care insurance might provide coverage at $100 per day for care provided in a nursing home or in other settings for 3 years after a waiting period of 90 days. Because long-term care insurance claims might not be filed for many years after the product is purchased, the insured can purchase protection against inflation, which can increase the daily benefit amount covered. In addition, long-term care insurance products can (1) cover home care at varying percentages of the daily benefit amount; (2) offer people a range of other types of options, such as policies that return a portion of the premium payments if the person dies; and (3) include selected benefits, such as international coverage or care-coordination services that, among other things, provide information about long-term care services to the enrollee and monitor the receipt of services. Many factors affect long-term care insurance premiums. Carriers charge higher premiums for richer benefits; for example, higher daily benefit amounts, longer benefit periods, and higher levels of inflation protection will increase the cost. Premiums are based on the age of the applicant, with premiums increasing more rapidly as age increases. Premiums are also based on the health status of the applicant.
Most carriers selling coverage in the individual market assign applicants to one of three general rating categories based on health status when underwriting the coverage—preferred, standard, or substandard—with associated discounts and surcharges. In addition, carriers in the individual market usually offer discounts to married couples when both spouses purchase coverage. Products sold in the group market may be sold on a guaranteed issue basis during an open enrollment period, with no or limited underwriting for employees actively at work who enroll through an employer-sponsored program, and the products generally do not provide discounts for spouses. Carriers cannot increase a particular person’s premiums but can increase premiums for a group of people who bought the same type of policy when the carrier can demonstrate that anticipated costs will exceed premium revenue. Carrier pricing assumptions, including projected interest rates, morbidity or illness rates, and lapse rates—the number of people expected to drop their policies over time—all affect premium rates and rate setting. Carriers estimate the total amount of premiums to be collected for long-term care insurance policies sold as well as projected claims and administrative costs for these policies using an anticipated lifetime loss ratio. This ratio describes what portion of total premiums is expected to be paid for claims for the reimbursement of the costs of long-term care over the life of a set of policies. The portion of premiums not spent on claims is used to pay for administrative costs, such as marketing, agent commissions, claims handling, overhead, and taxes, and for profits. In the past, National Association of Insurance Commissioners (NAIC) model regulations for long-term care insurance stated that carriers should spend a minimum of 60 percent of collected premiums on claims. However, in model regulations released in August 2000, NAIC recommended that carriers price their products high enough initially to prevent the need for future rate increases rather than target a minimum percentage to be spent on claims. So far, according to NAIC, a majority of states have adopted long-term care insurance regulations based on the 2000 NAIC model, while some states still require minimum loss ratios. Many large carriers set premium rates on a national rather than regional basis. Carriers also price to cover a profit margin and administrative costs as well as to meet minimum loss ratios. Few claims are expected to be submitted during the early years of a long-term care insurance policy. As a result of underwriting, it is unlikely that many people could meet the eligibility requirements to buy the policy yet submit a claim within 3 years. Industry experts suggested that the effects of underwriting begin to decline and the rate of claim submissions starts to increase after about 3 to 7 years. The rate of increase in claim submissions depends on the average age of the enrollees, with most long-term care insurance claims submitted when people reach their mid-70s to mid-80s. Industry experts also noted that the rate of claim submissions in the federal program is expected to peak 25 years or more after the program began. Because the average age of enrollees in the individual market is higher than the average age of enrollees in the federal program, the rate of claim submissions is likely to peak earlier in the individual market, after 15 to 25 years.
The rate of claim submissions is likely to peak after 30 to 40 years in the group market because of the younger average age of its enrollees. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) specified conditions under which long-term care insurance benefits and premiums would receive favorable federal income tax treatment and provided specific protections to people who purchased tax-qualified plans. Long-term care insurance plans must meet certain requirements contained in HIPAA to be considered tax-qualified. For example, according to HIPAA, a plan must begin coverage when a person is certified to need substantial assistance with at least two of the six ADLs and a disability is expected to last 90 or more days, or to need regular supervision because of a severe cognitive impairment. In addition, federally tax-qualified plans must comply with the NAIC long-term care insurance model act and regulations in effect as of January 1993, as incorporated into HIPAA. These provide certain consumer protections, such as preventing a carrier from (1) not renewing a long-term care insurance policy because of age or deteriorating health and (2) increasing the premium of an existing policy because of a person’s age or claims filed. Another consumer protection was that carriers had to offer inflation protection as specified in the NAIC model regulations. Each state establishes its own long-term care insurance laws and regulations that cover areas such as benefits, premium setting, and consumer protections. As a result, product requirements in the individual and group markets can vary among the states. According to NAIC, 41 states based their long-term care insurance regulations on the NAIC model, 7 based their regulations partially on the model, and 3 did not follow the model. The number of long-term care insurance policies sold has been small—about 9 million as of 2002, the most recent year for which data were available. About 80 percent of these policies were sold through the individual insurance market and the remaining 20 percent were sold through the group market. In March 2005, 13 percent of full-time workers in private industry had access to employer-sponsored long-term care insurance benefits; within private industry, 21 percent of workers in large establishments with 100 or more workers had access to this benefit. People purchase policies from carriers in the individual market, usually through agents or brokers, and choose their own benefits from among a range of options the carriers offer. Groups—for example, employers, associations, or unions—purchase policies from carriers in the group market. Groups usually design the benefits, and enrollees are often given some benefit options from which to choose, for example, differing daily benefit amounts and benefit periods. However, benefit choices offered in employer-sponsored group products tend to be more limited than those that are available in the individual insurance market. Some groups offer benefit packages in which the benefit options are predetermined. In contrast to health insurance, where employers often contribute a share of the premium costs, enrollees in group long-term care insurance coverage usually pay the entire premium. A recent downturn in the long-term care insurance industry has led to more conservative assumptions when setting premiums and consolidation among carriers.
The long-term care insurance industry experienced 18 percent annual growth in the number of policies sold from 1987 through 2002, but the industry has experienced a downturn in more-recent years. Beginning in 2003, many carriers in the individual market raised premiums, left the marketplace, or consolidated to form larger companies. This activity occurred in response to several factors including high administrative expenses relative to premiums; lower-than-expected lapse rates, which increased the number of people likely to submit claims; low interest rates, which reduced the expected return on investments; and new government regulations limiting direct marketing by telephone. Many carriers revised the assumptions used in setting their premium rates, taking a more conservative approach that led to higher premiums, while state regulators increased their oversight of the industry. Currently, several large carriers dominate the coverage sold in the individual and group markets as a result of mergers and acquisitions, and sales in the group market are growing faster than in the individual market. The federal government began offering group long-term care insurance benefits in 2002 for federal employees, retirees, and certain other people. When the Federal Long Term Care Insurance Program began, eligible people could apply for benefits during two specified time periods: (1) an early enrollment period for benefit options that were somewhat limited that ran from March 25, 2002, through May 15, 2002, intended for people who were well-informed about long-term care insurance and were eager to enroll in the federal program and (2) an open enrollment period for all benefit options that ran from July 1, 2002, through December 31, 2002. Active and retired federal and Postal Service employees, active and retired members of the uniformed services, qualified relatives, and certain others are eligible to apply for federal long-term care insurance benefits. Following the open enrollment period, eligible people could apply at any time. The federal program determines eligibility for long-term care insurance through underwriting. During the early and open enrollment periods, the program used an abbreviated underwriting application to determine eligibility for active employees and active members of the uniformed services and their spouses who applied. All other applicants, including retirees and qualified relatives, used the full underwriting application, which was similar to underwriting in the individual insurance market. Since the conclusion of the open enrollment period, newly hired federal and Postal Service employees and newly active members of the uniformed services who apply for long-term care insurance within 60 days of employment can do so using an abbreviated application, as can their spouses. All other applicants must use the full underwriting application. The federal program offers four prepackaged plan designs, each with a 90-day elimination period, a choice of two types of inflation protection— either automatic compound inflation protection or the future purchase option—and the following benefit options: Package 1—$100 daily benefit amount for comprehensive coverage and Package 2—$150 daily benefit amount for comprehensive coverage and Package 3—$150 daily benefit amount for comprehensive coverage and an unlimited benefit period, and Package 4—$100 daily benefit amount for facilities-only coverage and 3-year benefit period. 
If not choosing a prepackaged plan, an enrollee in the federal program also has several options for customizing benefits. The federal program was designed to comply with the NAIC model regulations and HIPAA tax-qualification standards, which specify that certain benefit options be offered. (App. II provides more information on federal long-term care insurance benefits and enrollment in these benefits.) The federal program provides reimbursement for costs of care when an enrollee is unable to perform at least two ADLs for an expected period of at least 90 days or needs substantial supervision because of a severe cognitive impairment. Reimbursement is based on the benefits chosen by the enrollee. The federal government does not contribute to the cost of coverage, so an enrollee pays the entire premium for the benefits chosen. OPM, rather than the states, regulates the federal program, and Partners administers the program in accordance with the requirements of the contract between OPM and Partners. The contract—which was signed on December 18, 2001, and extends for a period of 7 years—defines key administrative requirements, including who controls program assets and how profits are determined. The contract requires that the parent companies of Partners—John Hancock Life Insurance Company and Metropolitan Life Insurance Company—must hold federal program assets in accounts separate from all their other businesses. At the end of the contract period, OPM may decide to enter into a new contract with Partners. However, if OPM selects a different contractor at that time, the financial assets of the federal program would be transferred to the new contractor. The contract also specifies how Partners earns a profit each year. The profit formula consists of two parts: (1) some of the profit is capped at 6.5 percent of premiums collected in a year—nearly half of this type of profit is subject to performance criteria while the rest is guaranteed—and (2) some of the profit is based on the performance of the total assets of the federal program being managed by Partners to pay future claims—this type of profit consists of 0.3 percent of the total assets, called a “risk charge.” Partners must pay federal taxes on its total profit, but may charge other taxes to the federal program. Partners also collects investment-management fees that are less than 0.2 percent of total assets. No profit is allowed if the premiums are not sufficient to cover claims and expenses. While OPM expects premium rates to remain level over an enrollee’s lifetime, Partners may raise or lower premiums for groups of enrollees during the contract period with OPM’s agreement. Additionally, premium rates may be changed at the time of a new contract. The federal program offered benefits that were similar to those of other long-term care insurance products we reviewed and usually offered lower premiums for comparable benefits in individual products. While federal program enrollees could choose from many options to customize their benefits, a broader range of options was available in the other products we reviewed, especially in the individual products. However, despite the broader range of options available in the other products, most enrollees in the federal program and in individual and group products chose similar daily benefit amounts, elimination periods, and benefit periods. A greater percentage of federal enrollees chose automatic compound inflation protection compared with enrollees in other products.
Overall, annual premiums in the federal program, averaged across three benefit plan designs, were lower for both single people and married couples who were both the same age compared with similar individual products sold on March 31, 2005. Moreover, of total premiums projected to be collected over the life of the coverage sold during the study period, the federal program expected to pay a higher percentage in claim payments and a lower percentage in administrative costs compared with individual and group products. Long-term care insurance benefit options were similar in the federal program and in individual and group products we reviewed. While the federal program offered over 500 possible benefit option combinations in addition to the four prepackaged benefit plans, other products, especially individual products, offered more possible benefit combinations and more extensive customization in daily benefit amounts and in elimination and benefit periods. Benefits offered in the federal program and in the individual and group products we reviewed were covered by consumer protections required for HIPAA tax-qualified plans. These consumer protections included, among other provisions, that enrollees be offered an option to protect their benefits against inflation. However, some individual and group product enrollees were also offered the opportunity to purchase policies that did not meet requirements for HIPAA tax-qualified plans. In addition, according to officials at OPM and Partners, the federal program offered several unique benefits, including payment to family members providing informal care, international coverage, and a process allowing third-party review of denied claims. Table 1 summarizes the benefit options in the federal program and in the individual and group products we reviewed. Most enrollees in the federal program chose comprehensive coverage and daily benefit amounts, elimination periods, and benefit periods similar to those chosen by enrollees in individual and group products. Most enrollees in all products chose to have the cost of long-term care services reimbursed at a rate in the range of $100 to $199 per day and chose an elimination period of 90 days or greater and a benefit period from 3 to 5 years. With regard to inflation protection, over two-thirds of federal program enrollees chose automatic compound inflation protection, a higher proportion than for enrollees in individual and group products. Federal enrollees who did not choose automatic compound inflation protection received the future purchase option as a default. Several experts and industry officials said the federal government was a leader in the group market by encouraging enrollees to choose more comprehensive inflation-protection benefits. Table 2 summarizes the benefit options chosen by enrollees in the federal program and in individual and group products. Federal Long Term Care Insurance Program annual premiums were usually lower than annual premiums for individual products for three benefit packages with similar benefit options sold on March 31, 2005. Overall, the average premium in the federal program for the three benefit packages for single people was 46 percent lower than average premiums for individual products we reviewed, while premiums for married couples who were both the same age were 19 percent lower.
However, the premium estimates reported for individual products do not include discounts for good-health status, which several carrier officials said were about 10 percent to 15 percent and apply to about one-third of all enrollees. Figure 1 compares average annual premiums for three benefit packages and overall for the federal program with average annual premiums for the individual products we reviewed at five carriers. The pattern of lower premiums in the federal program compared with those in the individual products remained consistent in different age groups as well as for single people and married couples. Figure 2 shows the range in annual premiums in the individual products we reviewed relative to annual federal premiums for single people and married couples who were both the same age, by age group, for the most popular federal comprehensive benefit package. Appendix III provides more information on annual federal and individual product premiums for the benefit packages we reviewed. When compared with premiums offered by CalPERS, the only group product for which we had premium information, average federal premiums were higher for two of the three packages. Overall, the average annual federal premium for single people and for married couples who were both the same age for the three benefit packages combined was 3 percent higher than the average annual premium for CalPERS. As measured by the anticipated lifetime loss ratio, the federal program expects to spend a higher proportion of collected premium on claims and a lower proportion of collected premium on administrative costs than individual and group products. The Federal Long Term Care Insurance Program had a higher anticipated lifetime loss ratio than the average anticipated lifetime loss ratios for the individual and group products we reviewed—75 percent for the federal program, compared with 59 percent for individual products and 68 percent for group products. The federal program expected to pay out in claim payments three-quarters of the $3.1 billion in premiums it projected would be collected over the life of the policies for all policies sold from March 25, 2002, through March 31, 2005. The federal program expected to spend the remaining amount of collected premiums—25 percent—on administrative costs, including marketing, underwriting, claims handling, overhead, and taxes, and on profits. For individual products sold during July 1, 2002, through March 31, 2005, individual market carriers estimated that an average of 41 percent of total premiums collected would cover administrative costs and profits. Unlike the federal program, these administrative costs included agent commissions, which averaged 17 percent of premiums collected for the individual products we reviewed, or about half of their administrative costs. For group products, carriers estimated that an average of 32 percent of total premiums collected for coverage sold during this time period would cover administrative costs and profits. The employee participation rate in the Federal Long Term Care Insurance Program for active federal civilian employees was 5 percent, comparable to the industry average in the group market, but overall enrollment was lower than the expectations established by Partners. Federal enrollees were younger than enrollees in individual products and older than enrollees in group products. For all products we reviewed, more women than men obtained coverage. 
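The anticipated lifetime loss ratios cited above can be illustrated with a short calculation. The sketch below, in Python, computes a loss ratio as the present value of expected claim payments divided by the present value of expected premiums over the life of a set of policies, consistent with the description of this measure in the scope and methodology discussion later in this report. The cash-flow profile and the 4.5 percent discount rate are hypothetical assumptions chosen only to produce a ratio near the federal program's anticipated 75 percent.

```python
# Minimal sketch of an anticipated lifetime loss ratio: the present value of
# total expected claim payments divided by the present value of total expected
# premiums over the life of a set of policies. The cash flows and the
# 4.5 percent discount rate below are hypothetical, for illustration only.

def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows (year 1, year 2, ...) to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

def lifetime_loss_ratio(expected_premiums, expected_claims, discount_rate=0.045):
    return (present_value(expected_claims, discount_rate) /
            present_value(expected_premiums, discount_rate))

# Hypothetical profile: level premiums for 30 years and claims concentrated in
# later years, loosely mirroring the expectation that most long-term care
# claims are submitted decades after policies are sold.
premiums = [100.0] * 30
claims = [0.0] * 15 + [60.0] * 5 + [330.0] * 10

print(f"anticipated lifetime loss ratio: {lifetime_loss_ratio(premiums, claims):.0%}")
# prints: anticipated lifetime loss ratio: 75%
```

Under this measure, the remaining share of collected premiums, 25 percent in the federal program's case, is what covers administrative costs and profits.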
The Federal Long Term Care Insurance Program’s employee participation rate of 5 percent after the open enrollment period for active federal civilian employees was comparable to the industry average in the group market. Experts suggested that typically 5 to 6 percent of a group’s potentially eligible population would enroll in a long-term care insurance product when initially offered coverage. Participation rates tend to increase over time, usually reaching closer to 8 percent, depending upon the average age of the eligible population. The federal program’s employee participation rates were much lower for active military members, at 0.2 percent, and for active Postal Service employees, at 0.9 percent. Active military members are young and can be difficult to reach for marketing purposes, which might explain why they were less likely to apply than active federal workers, who tend to be older and can be reached directly through their workplace. Active Postal Service employees are also difficult to reach because they are located throughout the nation rather than grouped in centralized locations, and because access to these employees for marketing purposes has been restricted during working hours. The federal program was the largest employer-sponsored group in the nation, with more than 218,000 individuals enrolled for new policies sold from March 25, 2002, through March 31, 2005. The next largest group program was CalPERS, with more than 175,000 enrollees for policies sold from 1995 through 2005. According to Partners, the federal program accounted for 15 percent of the enrollees in the entire group market and 2 percent of the entire long-term care insurance market in 2002. Even though it was the largest group in the nation, the federal program’s enrollment was lower than expected. Partners initially estimated in 2001 that 286,066 people would enroll during the open enrollment period, but actual enrollment was 161,048, or 44 percent lower than expected. Partners also estimated that enrollment would reach 343,280 by the third year of the program; total enrollment eventually rose to 218,890 enrollees for new policies sold from March 25, 2002, through March 31, 2005, or 36 percent lower than expected. Some of the lower-than-expected enrollment can be explained by the low participation rates for active military members and Postal Service employees. Additionally, according to Partners, the terrorist attacks in the fall of 2001 resulted in slower sales of discretionary products, such as long-term care insurance, and also resulted in temporarily reduced access to federal employees, military members, and Postal Service employees for marketing purposes during the open enrollment period. A representative of Partners and an expert knowledgeable about the federal program indicated that a pool of at least 200,000 enrollees is adequate for the federal program to achieve financial stability, although no minimum number was ever formally established. The federal program focused its marketing efforts on a core group of nearly 6 million people out of an estimated eligible population of almost 19 million people, and the majority of the enrollees came from this core group. The core group consisted of 1.8 million active federal civilian employees, 1.4 million active military members, 0.8 million active Postal Service employees, and 1.8 million spouses of active employees and military members, as shown in table 3. 
Almost two-thirds of the 218,890 people enrolled from March 25, 2002, through March 31, 2005, came from this core group. According to OPM officials, the federal program also reached out to retired federal employees, retired military members, and retired Postal Service employees. The enrollees during the first 3 years of the federal program represented about three-quarters of the applications submitted. The federal application approval rate of 74 percent was similar to the average approval rate of 75 percent for individual products, but lower than the average approval rate of 84 percent for group products, which may enroll active workers using guaranteed issue during an open enrollment period. The most common reasons for denial of an application for the federal program and for the group products were height and weight outside of insurable standards, a chronic condition such as diabetes or cardiac problems, and cognitive impairment. In addition to these reasons, the most common reasons for denial of an application for the individual products included cancer, stroke, and musculoskeletal problems. The average age of federal enrollees was 56 years at the time of enrollment, compared with an average age of 60 for enrollees in individual products and 52 for enrollees in group products, as shown in table 4. The average age was 54 for enrollees in CalPERS. In the individual market, carriers typically target older adults who are planning for retirement or who have already retired. The carriers are able to market to them through direct contact from commission-based agents. In the group market, carriers typically target active employees, who are younger than enrollees typically marketed to in the individual market. The carriers market to the active employees through the employer via mailings and on-site enrollment meetings, but may not be able to obtain contact information for retirees. Unlike much of the group market, the federal program does have access to retiree contact information and is able to market to retirees through mailings to their home addresses. More women than men enrolled in the Federal Long Term Care Insurance Program, individual products, and group products. (See table 4.) While more women than men enrolled in the federal program overall, slightly more men than women enrolled among eligible active federal employees, active military members, and active Postal Service employees. However, women in these groups enrolled in the federal program at a higher rate than their representation in the eligible population. Enrollees in the federal program from these groups were 51 percent male and 49 percent female, while the eligible population of all active federal employees, active military members, and active Postal Service employees was 67 percent male and 33 percent female. The early claims experience of the Federal Long Term Care Insurance Program was below the expectations established by Partners. During its first 3 years, the federal program paid 39 percent of what it initially expected to pay for claims per enrollee; the number of claims paid per enrollee also was lower than initial expectations. It is still too early to determine whether this trend will continue or whether adjustments to the expected claims experience or premiums are needed. About half of the total amount of claim payments was spent on facility care. The most common medical conditions prompting claims in the early years of the federal program were cancer, stroke, and injuries and poisoning. 
Across the individual and group products we reviewed, the most common medical conditions that prompted claims were also cancer, stroke and injuries, as well as cognitive problems, musculoskeletal disorders, cardiac disease, and arthritis. The cumulative claims experience in the first 3 years of the Federal Long Term Care Insurance Program was considerably lower than the expectations established by Partners. The program paid 39 percent of the claims expenditures expected per enrollee for long-term care services and paid 33 percent of the expected number of claims per enrollee, as shown in table 5. While the overall claims experience for the first 3 years was lower than expected, the number of claims paid as a percentage of expected claims in each consecutive year of operation was higher than the previous year. In the first year of operation, the amount paid for claims per enrollee was 40 percent of expected payments and the number of claims per enrollee was 4 percent of expected claims. By the third year of operation, the amount paid for claims per enrollee had remained level at 40 percent of expected payments, while the number of claims per enrollee had increased to 48 percent of expected claims. It is still too early to determine whether the early claims experience will continue or whether adjustments to the expected claims experience or premiums are indicated. While having lower-than-expected claims experience is a positive financial indicator, if the claims experience is significantly lower than expected over the longer term, then it is possible that the premiums are too high. On the other hand, in accordance with NAIC premium-setting guidelines, it may be appropriate to project the claims experience assuming moderately adverse results to protect against the need to raise premiums. As noted earlier, it is expected that the number of claims submitted in the first years of a long-term care insurance program will be a small percentage of the claims submitted over time— most claims are not expected to be submitted until 25 years or more after the program begins. Additionally, the expected claims experience is sensitive to factors such as the level of underwriting, the total number of enrollees, the ages of the enrollees, and the types of enrollees—for example, active workers, retirees, or relatives. Furthermore, the claims experience is only one of many factors—such as interest rates, lapse rates, and mortality rates—that affect the long-term financial outlook of the program. The financial projections for long-term care insurance are sensitive to changes in assumptions about all these factors. Figure 3 shows the amount of paid claims per 10,000 enrollees and figure 4 shows the number of paid claims per 10,000 enrollees during the first 3 years of the program compared with the expected claims experience over the first 35 years of operation. Facility care accounted for a considerable portion of the federal program claim payments in the first 3 years. Of the total $3.6 million it paid for claims in the 3-year period, the federal program spent 49 percent on facility care, 3 percent on home care, 22 percent on informal caregivers, and 27 percent on other care. While about half of the total claim payment amount was spent on facility care, this type of care represented less than a quarter of the total number of claims. Generally, most early long-term care insurance claims are submitted for conditions such as cognitive problems, cancer, arthritis, stroke, and injuries. 
For the federal program, the most common medical conditions that prompted claims during this relatively early period were cancer, stroke, and injuries and poisoning. Across the individual and group products we reviewed, the most common medical conditions that prompted claims were also cancer, stroke and injuries, as well as cognitive problems, musculoskeletal disorders, cardiac disease, and arthritis. The Federal Long Term Care Insurance Program generally compared favorably with other products we studied during the first 3 years it offered coverage. The federal program offered benefits comparable to other products at competitive premium rates for similar benefits. Ultimately, the premium any enrollee pays for a long-term care insurance product is affected by several different factors, including the benefit options purchased, the age of the enrollee at the time of purchase, applicable discounts or surcharges, and the results of underwriting decisions. In addition, the premium is affected by the underlying assumptions about what will happen in the future regarding the number and dollar value of claims filed, interest rates, mortality rates, and lapse rates. If the actual claims experience, interest rates, mortality rates, or lapse rates vary significantly from what was expected, then this could mean that the premiums were too low or too high, and that premium or benefit adjustments could be warranted. Because the federal program had been offering coverage for only about 3 years at the time of our study, it was too early to draw conclusions about the claims experience, especially in relation to the premiums charged. Consistent with other long-term care insurance products, the federal program expected most enrollees, who averaged 56 years old when they enrolled, to submit long-term care insurance claims in their mid-70s to mid-80s—the time when most claims are submitted. While the early claims experience of the federal program was considerably lower than initially projected before the program began, an assessment of the claims submitted during the next several years and of other factors that affect the financial performance of the program will begin to provide a clearer picture of the longer-term implications. We recommend that the Director of OPM take the following two actions. First, the Director should analyze the reasons for the lower-than-expected early claims experience and, as appropriate, use the results of this analysis to modify assumptions about the expected claims experience. Second, the Director should analyze the projections for the amount of premiums to be collected to pay for claims, including an analysis of the assumptions made for the projections that are related to future claims experience and other factors affecting premiums. OPM should report both analyses to Congress prior to the next contract negotiations. We provided a draft of this report to OPM, Partners, CalPERS, and five long-term care insurance carriers. In its written comments, OPM generally agreed with our findings and provided comments on our recommendations. OPM stated that it intends to consider this report when performing due diligence before making a decision about a new contract for administration of the Federal Long Term Care Insurance Program, in accordance with the Long-Term Care Security Act. OPM also stated that the discussion of claims experience and premium setting in this report provides all the information currently available, precluding the need for a specific report on these issues at this time. 
OPM commented that it would provide updated information on claims experience and premium setting in its written recommendation to Congress prior to making a decision about the next contract. We support OPM’s willingness to consider updated information on claims experience and premium setting as it works with Congress in determining the next contract for the Federal Long Term Care Insurance Program, and we agree that a separate report will not be necessary. We believe that it is important that actuarial assumptions about future claims experience and premium setting reflect the experience of the program to date while still anticipating moderately adverse assumptions regarding claims experience and other factors in the future. In its comments, OPM also indicated that the expectations about the federal program’s enrollment and claims experience were established by Partners prior to the start of the program, rather than established by the marketplace. We revised the report to reflect that the expectations about enrollment and claims were established by Partners. (OPM’s comments are reprinted in app. IV.) In its written comments, Partners stated that the recommendation in the draft report implied that claims experience is the determining factor in the pricing of premiums, but that other sections of the report explain that other factors in addition to claims experience affect pricing, such as interest rates and lapse rates. We clarified the recommendation to reflect that other factors in addition to claims experience affect the pricing of premiums. OPM, Partners, and one carrier provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Director of OPM and interested congressional committees. We will also provide copies to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To evaluate the competitiveness of the Federal Long Term Care Insurance Program, we surveyed Long Term Care Partners, LLC (referred to as Partners), the administrator of the federal program; the California Public Employees’ Retirement System (CalPERS), the second largest long-term care insurance group in the nation after the federal program; and five of the largest long-term care insurance carriers to obtain long-term care insurance data. The five insurance carriers were AEGON USA, Bankers Life and Casualty Company, Genworth Financial, John Hancock Life Insurance Company, and Metropolitan Life Insurance Company. All five carriers sold policies in the individual market, and two of the five carriers—John Hancock Life Insurance Company and Metropolitan Life Insurance Company—were also among the five largest carriers that sold products in the group market. To supplement these data, we interviewed officials at the Office of Personnel Management (OPM); Partners; the five carriers; CalPERS and the organization that administers its program; and five trade associations, including one representing actuaries. We also interviewed three experts on long-term care insurance. In addition, we reviewed studies and literature addressing long-term care insurance. 
We conducted our work from March 2005 through February 2006 in accordance with generally accepted government auditing standards. We developed a data-collection instrument to obtain uniform long-term care insurance data from Partners, CalPERS, and the five carriers. In developing the instrument, we attempted to collect as much data as possible while also considering the burden our request would place on the respondents. Because of the proprietary nature of much of the data we requested from the five carriers, we agreed to report the data so that they could not be attributed to any specific carrier unless the carrier agreed that we could release the data. To capture the full experience of the Federal Long Term Care Insurance Program, we requested data from Partners for the period March 25, 2002—the first day of the federal early enrollment period—through March 31, 2005. We requested data from the carriers and CalPERS for the period July 1, 2002, through March 31, 2005. To document the early enrollment and open enrollment periods for the federal program, we also requested data from Partners for the period of March 25, 2002, through February 7, 2003—the last day the open enrollment period applications were processed. From each source, we requested data for the following categories: benefits, premiums, administrative costs, enrollment and enrollee characteristics, and claims experience. We requested data on the number of enrollees in the individual and group markets (including the federal program and CalPERS) who chose selected benefit options for new long-term care insurance policies sold from July 1, 2002 (March 25, 2002, for the federal program) through March 31, 2005. We collected data on coverage types, daily benefit amounts, elimination periods, benefit periods, inflation-protection options, Health Insurance Portability and Accountability Act of 1996 (HIPAA) tax-qualification status, and optional benefits offered. The respondents determined which policies they considered to be sold during the period. While we asked the respondents, where possible, to report data for sold policies that became active and for which they collected premiums, two respondents reported benefit data for the number of new applications submitted rather than for the new policies sold during the period. Because the federal program offered long-term care insurance coverage in four prepackaged plans, we asked Partners to identify the most popular benefit package chosen. (Table 6 in app. II summarizes the four prepackaged plans offered in the federal program.) We asked Partners to provide annual premiums for a policy sold on March 31, 2005, for enrollees in each of the four prepackaged benefit plans offered by the federal program, with automatic compound inflation protection. Partners provided premium data for enrollees of four different ages—40, 50, 60, and 70 years old—for the four benefit packages, with automatic compound inflation protection and with a future purchase option for inflation protection. Because the federal program provided no discounts for spouses, we doubled the premiums that single people paid to determine how much a married couple of the same age would pay annually for a long-term care insurance policy through the federal program. 
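A short sketch in Python shows how the married-couple comparison was constructed. The dollar figures are hypothetical, and the assumption that a carrier's spousal discount applies to the couple's combined premium is made only for illustration; the discounts themselves (30 or 40 percent) are described below.

```python
# Minimal sketch of the married-couple premium construction described above.
# The federal program offered no spousal discount, so a couple's combined
# annual premium is twice the single-person premium. The individual products
# we compared reflected each carrier's spousal discount. All dollar amounts
# are hypothetical, for illustration only.

def federal_couple_premium(single_premium):
    return 2 * single_premium  # no spousal discount in the federal program

def carrier_couple_premium(single_premium, spousal_discount):
    # Assumes, for illustration, that the discount applies to the couple's
    # combined premium when both spouses purchase coverage.
    return 2 * single_premium * (1 - spousal_discount)

hypothetical_federal_single = 1000.0  # illustrative annual premium
hypothetical_carrier_single = 1800.0  # illustrative annual premium

print(federal_couple_premium(hypothetical_federal_single))        # 2000.0
print(carrier_couple_premium(hypothetical_carrier_single, 0.30))  # 2520.0
```

In this illustration the federal premium is about 44 percent lower for a single person but only about 21 percent lower for a couple, which mirrors the pattern reported earlier (46 percent and 19 percent, respectively): the carriers' spousal discounts offset part of the federal program's premium advantage for married couples.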
To compare premiums with those in the federal program, we asked the five carriers selling products in the individual insurance market and CalPERS to provide annual premium data for the four federal benefit packages or for coverage that most closely resembled each package for people for the four ages—40, 50, 60, and 70 years old—for policies sold on March 31, 2005. Two of the five carriers selling individual products did not sell facilities-only coverage on March 31, 2005, so we did not include the premiums for facilities-only coverage in our analyses. Therefore, we compared premiums for the three comprehensive benefit packages in the federal program. Because carriers selling products in the individual market usually place people in rating categories according to their health and other criteria, we asked them to provide annual premiums for a single person underwritten into the standard rating category, which is the category most often used. Furthermore, as these carriers usually offer discounts for married couples, we asked them to provide annual premiums for a married couple of the same age underwritten into the standard rating category. The premiums the carriers reported reflected their discounts for couples, which in each case was either 30 percent or 40 percent. We also asked respondents to identify the coverage types, daily benefit amounts, elimination periods, benefit periods, and inflation protection for the packages if these benefits differed from those of the federal benefit packages. These are the benefit options that most affect premiums. We did not ask them to identify other benefits automatically included in the coverage or to identify the percentage of the daily benefit amount that the package covered for benefits such as formal home care or informal home care, if included. Other than for CalPERS—which, like the federal program, did not use rating categories or provide discounts for spouses— we did not request any premium data for other group products because of the variation that exists across the groups insured by each carrier. To compare the amount of premium spent on claims and the costs associated with administering long-term care insurance in the federal program with that of other products, we collected information on anticipated lifetime loss ratios. The anticipated lifetime loss ratio represents the present value of the total expected claim payments compared with the present value of the total expected premiums over the life of a set of policies. This ratio describes what portion of the premium dollar is expected to pay for claims over a long period, with the balance going to administrative costs and profits. We collected these data for new policies sold from July 1, 2002 (March 25, 2002, for the federal program) through March 31, 2005. Two respondents did not provide data on loss ratios. We asked for enrollment information from Partners, CalPERS, and the five carriers for new policies sold from July 1, 2002 (March 25, 2002, for the federal program) through March 31, 2005, including the number of new applications submitted and approved. We obtained data on selected enrollee characteristics, including age at time of enrollment and sex. We collected claims-related data from our study participants. For example, we obtained the primary medical conditions that prompted the claims from Partners, CalPERS, and the five carriers. 
However, because of differences in enrollee characteristics and benefit choices across the carriers that would affect the claims experience, we focused primarily on the early claims experience of the federal program. We collected data on the number of paid claims and the amount of claim payments during the 3-year study period from Partners. We also compared the anticipated claims experience for the federal program as projected prior to initial enrollment with the actual number of claims submitted and the actual amount of claim payments during the 3-year period. To learn about long-term care insurance and to discuss the type of data we wanted to obtain through our data request, we interviewed officials at OPM and Partners, a CalPERS official and an official from the organization that administers the program, and officials at five carriers that sold products in the individual market—two of these carriers were also among the five largest carriers that sold products in the group market. We also conducted follow-up interviews to clarify the data provided in response to our data request, to verify reliability of the data received, and to obtain additional information. We interviewed officials at several groups and associations as well as long- term care insurance experts. To obtain broader-based information about long-term care insurance, we interviewed the Director of Long-Term Care at America’s Health Insurance Plans and the Senior Director of Long-Term Care Insurance at the American Council of Life Insurers. We interviewed actuaries and health policy staff from the American Academy of Actuaries. To learn more about state regulation of long-term care insurance products we contacted officials at the National Association of Insurance Commissioners. We interviewed officials at AARP to learn about long-term care insurance and the products offered through that association. We also interviewed three experts on long-term care insurance. We also reviewed studies on long-term care insurance, including a longitudinal study on buyers and nonbuyers of long-term care insurance. In addition, we reviewed a literature review and six policy briefs commissioned by the Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, all dated August 2004. The Federal Long Term Care Insurance Program offered enrollees the option of choosing a prepackaged benefit plan or of customizing benefits. In addition, some applicants for federal benefits who were denied regular coverage had another option available that offered nursing-home-only coverage, called the Alternative Insurance Plan, while all applicants denied coverage could purchase a Service Package, which did not provide insurance but offered services such as access to a person who coordinated care and to a discounted network of long-term care providers. Nearly two- thirds of all federal enrollees during the period March 25, 2002, through March 31, 2005, chose a prepackaged benefit plan, with the remaining enrollees during that period customizing benefits or enrolling in the Alternative Insurance Plan. Enrollees in the Federal Long Term Care Insurance Program could choose from four prepackaged benefit plans. In each of the plans, several benefit options—daily benefit amount; coverage period; elimination period; and maximum lifetime benefit, which is a combination of the daily benefit amount and benefit period—had been preselected into the packages along with all covered services. 
After selecting one of the four packages, the enrollee only had to choose the type of inflation protection—either automatic compound or the future purchase option. Table 6 shows the four prepackaged plans offered in the Federal Long Term Care Insurance Program. The federal program included care-coordination benefits and coverage for international benefits and had no war exclusion. Federal program care coordinators provide, among other services, general information about long-term care services; assess and approve need for care; develop a care plan; and monitor and reassess services. Using these services did not reduce an enrollee’s maximum lifetime benefits. Care-coordination services were also available to qualified relatives, who did not need to be enrolled in the program, although some services could be provided at an additional charge. Coverage for benefits received outside the United States was available at 80 percent of the maximum amounts that would otherwise be payable, with certain restrictions. Although the federal program did not have a war exclusion, it included a catastrophic-coverage limitation; that is, a catastrophic event could limit the benefit period. The federal program did not include several benefits and services that could be available in other products. For example, the federal program did not offer limited pay policies, in which the long-term care insurance policy could be paid up over a limited period of time; restoration of benefit options, in which any of the policy’s maximum benefits that has been used could be replaced if the enrollee did not receive benefits for a specified period of time; or any discounts for both spouses of a married couple purchasing coverage, all of which were options available in individual products. Federal program enrollees had several options for customizing benefits instead of choosing a prepackaged benefit plan. Within certain parameters, federal enrollees could design their own plans by mixing and matching benefit options. In total, the federal program provided for 528 design variations. In addition to the benefit options selected, covered services listed in table 6 were automatically included. Table 7 shows the type and number of benefit options available to enrollees. Applicants denied regular coverage in the Federal Long Term Care Insurance Program had other benefit options. Some federal employees, members of the uniformed services, and their spouses who could apply for the federal program using an abbreviated application, but who were denied regular coverage, were offered coverage in the Alternative Insurance Plan. This plan covered nursing homes only, had a 180-day elimination period, provided coverage for 2 years, and started with a weekly benefit amount of $200. In addition, all applicants for the federal program who were denied coverage could purchase a Service Package for an annual fee. This noninsurance option provided access to care- coordination services and discounts. Nearly two-thirds of the 218,890 people who enrolled in the federal program from March 25, 2002, through March 31, 2005, chose one of the four prepackaged benefits. As shown in table 8, 35 percent of the enrollees customized their benefits within the ranges offered by the federal program, and less than 1 percent enrolled in the Alternative Insurance Plan. Overall, 141,195 people enrolled in one of the four prepackaged benefit plans. Of the enrollees who chose a prepackaged benefit plan, 12 percent enrolled in the facilities-only package. 
The remainder, 88 percent, enrolled in one of the three comprehensive packages. Most of the people who enrolled in a comprehensive package enrolled in the Comprehensive 100 package. About two-thirds of all the enrollees choosing a prepackaged benefit plan also chose automatic compound inflation protection. Table 9 shows federal program enrollment in the four prepackaged benefit plans. Annual premiums for three comprehensive benefit packages offered in the Federal Long Term Care Insurance Program compared favorably with premiums at five carriers selling similar products in the individual insurance market. Tables 10 through 12 show that federal premiums for each of the three benefit packages were always lower than the average premium at five carriers for single people and for married couples who were both the same age. When considering the range of premiums available at the five carriers selling similar individual products, federal premiums for a single person were always lower than the premiums for individual products, while married couples who were both the same age could find lower premiums in the federal program in almost every case. In addition to the contact named above, Christine Brudevold, Assistant Director; Laura Sutton Elsberg; Elizabeth T. Morrison; Michelle Murray; and Joseph Petko made key contributions to this report.
The Long-Term Care Security Act required the federal government to offer long-term care insurance to its employees, their families, and others. The act also required GAO to conduct a study of the competitiveness of the Federal Long Term Care Insurance Program, which began in 2002, compared with individual and group products generally available in the private market. GAO compared the federal program's benefits, premiums, enrollment rates, and enrollee characteristics with other products over a 3-year period. GAO also compared the federal program's early claims experience with initial expectations. Program offered benefits similar to those of other long-term care insurance products GAO reviewed. Most enrollees in the federal program and in individual and group products chose similar benefit amounts, elimination or waiting periods, and benefit periods. The federal program usually offered lower premiums than individual products for comparable benefits. Overall, annual premiums for the federal program averaged across three benefit plan designs were 46 percent lower for single people and 19 percent lower for married couples who were both the same age in comparison with similar individual products sold on March 31, 2005. The participation rate in the Federal Long Term Care Insurance Program for active federal civilian employees--5 percent--was comparable to the industry average in the group market, although enrollment in the federal program was lower than initially expected. The average age of all enrollees in the federal program was younger than the average age of enrollees in individual products and older than the average age of enrollees in group products. The Federal Long Term Care Insurance Program paid 39 percent of what it initially projected to pay for claims per enrollee. The number of claims paid per enrollee was also lower than initial projections. While the early claims experience was below expectations, it is still too early to determine whether this trend will continue or whether adjustments to the projected claims experience or premiums are indicated, because most claims are not expected to be submitted for many years.
Both the Clean Water and Drinking Water SRF programs authorize EPA to provide states and local communities with independent and sustainable sources of financial assistance, such as low- or no-interest loans, for projects that protect or improve water quality and that are needed to comply with federal drinking water regulations and protect public health. The Clean Water SRF program was established in 1987 under the Clean Water Act, which was enacted to protect surface waters, such as rivers, lakes, and coastal areas, and to maintain and restore the physical, chemical, and biological integrity of these waters. The Drinking Water SRF program was established in 1996 under the Safe Drinking Water Act, which was enacted to establish national enforceable standards for drinking water quality and to guarantee that water suppliers monitor water to ensure compliance with standards. The Recovery Act provided $6 billion for EPA’s Clean Water and Drinking Water SRF programs. This amount represents a significant increase over the federal funds awarded to the base SRF programs in recent years. From fiscal years 2000 through 2009, annual appropriations averaged about $1.1 billion for the Clean Water SRF program and about $833 million for the Drinking Water SRF program. The Recovery Act funds represent a significant federal investment in the nation’s water infrastructure at a time when, according to a 2010 Congressional Budget Office report, overall spending on infrastructure has been declining, and when reported problems with the quality and safety of water supplies have raised questions about the condition of the nation’s infrastructure. In addition to increasing funds, the Recovery Act included some new requirements for the SRF programs. First, projects funded with Recovery Act SRF program funds had to be under contract—ready to proceed— within 1 year of the act’s passage, or by February 17, 2010. Second, states had to use at least 20 percent of these funds as a “green reserve” to provide assistance for green infrastructure projects, water- or energy- efficiency improvements, or other environmentally innovative activities. Third, states had to use at least 50 percent of Recovery Act funds to provide “additional subsidies” for projects in the form of principal forgiveness, grants, or negative interest loans (loans for which the rate of interest is such that the total payments over the life of the loans are less than the principal of the loans). Uses for these additional subsidies can include helping economically disadvantaged communities build water projects, although these uses are not a requirement of the act. Congress incorporated two of these requirements—green projects and additional subsidies—into the fiscal year 2010 and 2011 non-Recovery Act, or base, SRF program appropriations. In addition to program-specific provisions, water projects receiving Recovery Act funds had to meet the act’s Buy American and Davis-Bacon provisions. The Recovery Act generally requires that all of the iron, steel, and manufactured goods used in the project be produced in the United States, subject to certain exceptions. 
Federal agencies could issue waivers for certain projects under specified conditions, for example, if using American-made goods was inconsistent with the public interest or if the cost of goods was unreasonable; the act limits the "unreasonable cost" exception to those instances when inclusion of American-made iron, steel, or other manufactured goods would increase the overall project cost by more than 25 percent. Furthermore, agencies do not need to use American-made goods if they were not sufficiently available or not of satisfactory quality.

In addition, the Recovery Act applied Davis-Bacon provisions to all Recovery Act-funded projects, requiring contractors and subcontractors to pay all laborers and mechanics at least the prevailing wage rates in the local area where they were employed, as determined by the Secretary of Labor. Contractors were required to pay these workers weekly and submit weekly certified payroll records.

To enhance transparency and accountability over Recovery Act funds, Congress and the administration built numerous provisions into the act, including a requirement that recipients of Recovery Act funding—including state and local governments, private companies, educational institutions, nonprofits, and other private organizations—report quarterly on a number of measures. (Recipients, in turn, may award Recovery Act funds to subrecipients, which are nonfederal entities.) These reports are referred to as "recipient reports," which the recipients provide through one Web site, www.federalreporting.gov (Federalreporting.gov), for final publication through a second Web site, www.recovery.gov (Recovery.gov). Recipient reporting is overseen by the responsible federal agencies, such as EPA, in accordance with Recovery Act guidance provided by the Office of Management and Budget (OMB). Under this guidance, the federal agencies are required to conduct data quality checks of recipient data, and recipients can correct the data, before they are made available on Recovery.gov. Furthermore, additional corrections can be made during a continuous correction cycle after the data are released on Recovery.gov.

A significant aspect of accountability for Recovery Act funds is oversight of spending. According to the federal standards of internal control, oversight should provide managers with current information on expenditures to detect problems and proactively manage risks associated with unusual spending patterns. In guidance issued in February 2009, OMB required each federal agency to develop a plan detailing the specific activities—including monitoring activities—that it would undertake to manage Recovery Act funds. EPA issued its first version of this plan in May 2009, as required, and updated this document as OMB issued new guidance.
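The "unreasonable cost" exception to the Buy American provision described above turns on a simple percentage test. The sketch below, in Python, illustrates that test; the project costs are hypothetical, for illustration only, and the sketch does not address the other waiver grounds (public interest, availability, or quality).

```python
# Minimal sketch of the Recovery Act "unreasonable cost" test described above:
# the exception applies only when using American-made iron, steel, or other
# manufactured goods would increase the overall project cost by more than
# 25 percent. All figures are hypothetical, for illustration only.

UNREASONABLE_COST_THRESHOLD = 0.25

def unreasonable_cost_exception_applies(project_cost_with_foreign_goods,
                                         project_cost_with_domestic_goods):
    """Return True if the domestic-goods cost increase exceeds 25 percent."""
    increase = (project_cost_with_domestic_goods -
                project_cost_with_foreign_goods) / project_cost_with_foreign_goods
    return increase > UNREASONABLE_COST_THRESHOLD

# Hypothetical example: a $10.0 million project would cost $11.5 million with
# American-made goods, a 15 percent increase, so the exception would not apply
# and domestic goods would still be required.
print(unreasonable_cost_exception_applies(10.0e6, 11.5e6))  # False
```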
Nationwide, the 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act and reported using the majority of these funds for sewage treatment infrastructure and drinking water treatment and distribution systems, according to EPA data. In the nine states we reviewed, states used these funds to pay for infrastructure projects that help to address major water quality problems, although state officials said that in some cases, Recovery Act requirements changed their priorities or the projects selected for funding. The nine states also used their Recovery Act funding to help economically disadvantaged communities, although officials indicated that they continue to have difficulty helping these communities.

As of March 30, 2011, states had awarded funds for contracts and obligated the $4 billion in Clean Water SRF program funds and $2 billion in Drinking Water SRF program funds provided under the Recovery Act. As we reported in May 2010, EPA indicated that all 50 states met the Recovery Act requirement to award Recovery Act funds to contracted projects by February 17, 2010, 1 year after the enactment of the Recovery Act. In the 2 years since the Recovery Act was passed, approximately 79 percent, or $3.1 billion, of the Clean Water SRF program funds and approximately 83 percent, or $1.7 billion, of the Drinking Water SRF program funds have been drawn down from the Treasury by states.

Across the nation, the states have used the $6 billion in Recovery Act Clean and Drinking Water SRF program funds to support more than 3,000 water quality infrastructure projects. As shown in figure 1, the states used the majority of their Recovery Act Clean Water SRF program funds to pay for secondary and advanced treatment at wastewater treatment plants, as well as projects to prevent or mitigate sanitary sewer overflow. Wastewater treatment involves several processes, including primary treatment to remove suspended solids; secondary treatment to further remove contaminants using biological processes; and tertiary or advanced treatment to remove additional material in wastewater, such as nutrients or toxic chemicals. Sanitary sewer overflows can occur as a result of inclement weather and can pose significant public health and pollution problems, according to EPA.

As shown in figure 2, the states used about half of their Recovery Act Drinking Water SRF program funds to pay for projects to transmit and distribute drinking water, including pumps and pipelines to deliver water to customers. States used about 40 percent of their funds for projects to treat and store drinking water.

In addition to requiring that projects awarded funds be under contract within 1 year of the act's passage, the Recovery Act required that states use at least 20 percent of their funds for "green" projects. According to EPA data, all states met the 20-percent green requirement, with $1.1 billion of total Clean Water SRF program funds going to green projects and $544 million of total Drinking Water SRF program funds going to green projects. The goal of supporting green projects is to promote green infrastructure, energy or water efficiency, and innovative ways to sustainably manage water resources. Green infrastructure refers to a variety of technologies or practices—such as green roofs, porous pavement, and rain gardens—that use or mimic natural systems to enhance overall environmental quality. In addition to retaining rainfall and snowmelt and allowing them to seep into groundwater, these technologies can mitigate urban heat islands and sequester carbon. Figure 3 shows the amount of Clean Water and Drinking Water SRF program funds that states awarded to green projects by type of project.
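The 20-percent green reserve is a simple share test, and the national totals reported above can be checked in a few lines. The sketch below, in Python, uses the figures cited in this statement ($1.1 billion of the $4 billion in Clean Water SRF funds and $544 million of the $2 billion in Drinking Water SRF funds); the helper function is only an illustration of the arithmetic, not EPA's reporting system.

```python
# Minimal sketch of the Recovery Act "green reserve" share test: at least
# 20 percent of SRF Recovery Act funds must support green infrastructure,
# water- or energy-efficiency improvements, or other environmentally
# innovative projects. Dollar figures are the national totals cited above,
# in billions.

GREEN_RESERVE_MINIMUM = 0.20

def meets_green_reserve(green_funds, total_funds):
    share = green_funds / total_funds
    return share >= GREEN_RESERVE_MINIMUM, share

for program, green, total in [("Clean Water SRF", 1.1, 4.0),
                              ("Drinking Water SRF", 0.544, 2.0)]:
    met, share = meets_green_reserve(green, total)
    print(f"{program}: {share:.0%} green, requirement met: {met}")
# Clean Water SRF: 28% green, requirement met: True
# Drinking Water SRF: 27% green, requirement met: True
```

The same kind of check applies to the 50-percent additional-subsidy requirement discussed next.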
Nationwide, states also met the Recovery Act requirement to provide at least 50 percent of the Clean Water and Drinking Water SRF program funds as additional subsidies in the form of principal forgiveness, negative interest loans, or grants. Of the total Recovery Act funds awarded, 76 percent of Clean Water SRF Recovery Act funds and 70 percent of Drinking Water SRF Recovery Act funds were distributed as additional subsidies. Figure 4 shows the total Clean Water and Drinking Water Recovery Act funds awarded by states as principal forgiveness, negative interest loans, or grants. The remaining funds will be provided as low- or no-interest loans that will recycle back into the programs as subrecipients repay their loans.

In the nine states we reviewed, Recovery Act Clean and Drinking Water SRF funding has been used to address the major clean and drinking water problems in those states. The nine states we reviewed received a total of about $832 million in Recovery Act SRF program funds—about $579 million for their Clean Water SRF programs and about $253 million for their Drinking Water SRF programs. In total, these funds supported 419 clean and drinking water projects. Officials in the states we reviewed said, however, that Recovery Act requirements—particularly the need for projects to be under contract within 1 year of the act's passage and the green project requirement—either changed their priorities for ranking and funding projects or changed the projects they funded.

To award SRF program funds, each of the nine states we reviewed used a system to score and prioritize water projects seeking funds to address water quality problems. To do this, states generally rank or group water infrastructure projects, submitted by local municipalities or utilities, using a system of points. The projects with the most points are considered the highest priority on the list of projects for funding and, in all but one state we reviewed, state officials used their ranking system to address major water problems. In most of the nine states we reviewed, compliance is a key aspect of their ranking system, allowing points to be awarded to infrastructure projects that help the states eliminate causes of noncompliance with federal or state water quality standards and permits. Officials in most of the nine states said that they generally obtain information on their water systems' compliance with federal and state water quality standards through discussions with their program compliance staff and from state databases.

Officials in the nine states we reviewed told us that the Recovery Act requirements—the readiness of a project to proceed; the green project requirement; and, to a lesser degree, the Buy American and Davis-Bacon provisions—caused them to modify their ranking systems or otherwise modify the list of projects that receive Recovery Act funding.

Readiness of a project to proceed. In the nine states, officials included readiness to proceed and other Recovery Act requirements in their ranking system and selected projects on the basis of that ranking system or said that they did not fund—or bypassed—top-ranked projects that were not ready to proceed to construction by February 17, 2010, 1 year after the passage of the Recovery Act. For example, Washington State's two top-ranked clean water projects did not receive Recovery Act SRF program funds because they could not meet the February 2010 deadline. The projects were to decommission septic systems and construct a wastewater treatment plant to reduce phosphorus discharges to the Spokane River. In Wyoming, many of the projects that were not ready to proceed were water treatment plants, which state officials said take longer to design and plan for construction.
Although these higher-ranked projects did not receive Recovery Act funds, at least two states were able to fund these projects in other ways, such as through state grants or non-Recovery Act SRF program funds. Green project requirement. Three states listed green projects separately from other projects. For example, Washington State officials told us that they established a green projects category because they had anticipated that energy and water efficiency projects (green projects) would not score well under their ranking system, which focuses on water quality protection and improvements. Other states funded green projects ahead of higher- ranked projects. For example, Maryland bypassed many projects to fund the first green-ranked project on its list. Similarly, Nevada did not fund 11 higher-ranked projects and funded a lower-ranked drinking water project that had green components. Buy American and Davis-Bacon provisions. State officials identified a few projects that did not proceed because potential subrecipients either did not want to meet one or more Recovery Act requirements, such as the Buy American and Davis-Bacon provisions, or did not want to increase the cost of their projects. For example, local officials in Alabama withdrew their application for a drinking water project because the project was already contracted without Buy American and Davis-Bacon wage requirements, and an addendum to the contract to meet the regulations would have increased the project’s cost. Similarly, officials in all nine states said that a few communities chose not to apply for or withdrew from the Recovery Act funding process to avoid paperwork or the additional costs associated with the act’s requirements. For example, Wyoming officials said that potential subrecipients for three clean water projects refused funding, citing time constraints or difficulty meeting Buy American requirements. Although the Recovery Act did not require states to target Clean and Drinking Water SRF program funds to economically disadvantaged communities, six of the nine states that we reviewed distributed more than $123 million in clean water funds, and eight of the nine states distributed almost $78 million in drinking water funds, to these communities. This amount represents about 24 percent of the almost $832 million in Recovery Act funds that the states were awarded. As shown in table 2, a large majority of the funds provided to these communities were provided as additional subsidies—grants, principal forgiveness, and negative interest loans. According to officials in five states, they provided additional subsidies to economically disadvantaged communities because the communities would otherwise have had a difficult time funding projects. For example, officials in Nevada told us that clean and drinking water subsidies were directed to such communities because these communities not only have a difficult time funding projects, they also have some of the projects with the highest priority for addressing public health and environmental protection concerns. New Mexico officials told us that they directed additional drinking water subsidies to economically disadvantaged communities because these communities have historically lacked access to capital. In addition, officials in a few other states told us that small and economically disadvantaged communities often lack the financial means to pay back loans from the SRF programs or lack funds to pay for the upfront costs of planning and designing a project. 
Officials in at least two states also said that many small and economically disadvantaged communities even lack full-time staff to help manage the water infrastructure. Even with the additional subsidies available for projects, officials in a few states said that small and economically disadvantaged communities found it difficult to obtain Recovery Act funds. For example, Missouri officials told us that the Recovery Act deadline was the single most important factor hindering the ability of small and economically disadvantaged communities from receiving funding. New Mexico officials also told us that because small and economically disadvantaged communities typically do not have funds to plan and develop projects, few could meet the deadline and several projects that sought Recovery Act funds could not be awarded funding owing to the deadline. EPA’s Office of Inspector General (OIG) noted an additional challenge for EPA related to economically disadvantaged communities. In April 2011, the OIG reported that EPA could not assess the overall impact of Recovery Act funds on economically disadvantaged communities because it did not collect data on the amount of SRF program funds distributed to economically disadvantaged communities nationwide. The OIG recommended that EPA establish a system that can target program funds to its objectives and priorities, such as funding economically disadvantaged communities. For the quarter ending December 2009 through the quarter ending in June 2010, the number of full-time equivalent jobs (FTE) paid for with Recovery Act SRF program funds increased each reporting quarter from about 6,000 to 15,000 quarterly FTEs for planning, designing, and building water projects, as shown in figure 5. As projects are completed and funds spent, the number of FTEs funded has declined to about 6,000 for the quarter ending March 2011. Following OMB guidance, states reported FTEs that included only the jobs directly paid for with Recovery Act funding, not the employment impact on suppliers of materials (indirect jobs) or on the local communities (induced jobs). In addition, state officials told us that, although funding varies from project to project, 10 percent to 80 percent of a project’s funding is typically for materials such as cement for buildings and equipment such as turbines, pumps and centrifuges, and the remainder pays for labor or FTEs. To oversee Recovery Act projects and funds, EPA developed an oversight plan, as required by OMB. In response to our May 2010 bimonthly review and recommendation, EPA updated its guidance to include specific steps to monitor compliance with Recovery Act Clean and Drinking Water SRF program provisions. Our current work is showing that EPA and the states have made progress in implementing EPA’s updated plan, which included details on frequency, content, and documentation needed for regional reviews of state programs and state reviews of projects. EPA officials said that regional staff are visiting all 50 states and reviewing their Clean and Drinking Water SRFs according to its plan. Furthermore, officials in the nine states we reviewed indicated that they have visited Recovery Act projects at least once during construction, as required in EPA’s oversight plan. Our May 2010 report identified the challenge of maintaining accountability for Recovery Act funds and recommended improved monitoring of Recovery Act funds by EPA and the states. 
As we note above, our current work shows that EPA and the nine states we reviewed have made progress in addressing this challenge. Two challenges EPA and the states faced in spending Recovery Act SRF program funds may continue as requirements introduced with the Recovery Act are incorporated into the base programs. Specifically, in fiscal years 2010 and 2011, the Clean and Drinking Water SRF programs were required to include green projects and additional subsidization provisions. Encouraging green projects. The effort to support green projects was included in EPA’s fiscal year 2010 and 2011 appropriations for the base Clean and Drinking Water SRF programs. As we discussed above, under the green requirement in the Recovery Act, in certain cases state officials said they had to choose between a green water project and a project that was otherwise ranked higher to address water quality problems. We found similar results in our May 2010 report, when officials in some of the 14 states we reviewed said that they gave preference to green projects for funding purposes, and sometimes ranked those projects above another project with higher public health benefits. In addition to competing priorities for funding, EPA’s OIG found, in its February 2010 report, that a lack of clear guidance on the green requirement caused confusion and disagreements as to which projects were eligible for green funding. Officials in two of the nine states we reviewed noted that the goal of supporting green projects was not difficult to achieve because they had already identified green projects, but officials in four other states said that the 20-percent green project goal was difficult to achieve, leading one official to suggest that green projects be encouraged without setting a fixed percentage of program funds. Providing subsidization. The fiscal years 2010 and 2011 appropriations for the Clean and Drinking Water SRF programs also continued the requirement to provide additional subsidies in the form of principal forgiveness, negative interest loans, or grants. These provisions reduced the minimum share of funds that must be provided as additional subsidy from 50 percent of total Recovery Act funds to 30 percent of base SRF program funds. As with the Recovery Act, the appropriations in fiscal years 2010 and 2011 do not require this additional subsidy to be targeted to any types of projects or communities with economic need, and as the recent EPA OIG report notes, there are no requirements for EPA or the states to track how these subsidies are used. The Clean and Drinking Water SRF programs were created to be a sustainable source of funding for communities’ water and wastewater infrastructure through the continued repayment of loans to states. Officials in four of the nine states we reviewed identified a potential challenge in continuing to provide a specific amount of subsidies while sustaining the clean and drinking water SRF programs as revolving funds. State officials pointed out that when monies are not repaid into the revolving fund, the reuse of funds is reduced and the purpose of the revolving SRF program changes from primarily providing loans for investments in water infrastructure to providing grants. Mr. Chairman, Ranking Member, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information regarding this statement, please contact David C. 
Trimble at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Jillian Fasching, Susan Iott, Jonathan Kucskar, Carol Peterson, Beverly Ross, Carol Herrnstadt Shulman, Dawn Shorey, Kathryn Smith, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) included $4 billion for the Environmental Protection Agency's (EPA) Clean Water State Revolving Fund (SRF) and $2 billion for the agency's Drinking Water SRF. This testimony is based on GAO's ongoing review of clean and drinking water projects. It provides preliminary observations on (1) the status and use of Recovery Act SRF program funds nationwide and in nine selected states, (2) jobs funded by the Recovery Act SRF programs and federal and state efforts to oversee the programs, and (3) challenges, if any, that states have faced in implementing Recovery Act requirements. For this ongoing work, GAO is, among other things, obtaining and analyzing EPA nationwide data on the status of Recovery Act clean and drinking water funds and projects, as well as information from a nonprobability sample of nine states that it had not reviewed in previous bimonthly reports. These states represent all but one of EPA's 10 regions. GAO is also interviewing EPA and state officials about their experiences with the Recovery Act clean and drinking water funds. Nationwide, the 50 states have awarded and obligated the almost $6 billion in Clean Water and Drinking Water SRF program funds provided under the Recovery Act and reported using the majority of these funds for sewage treatment infrastructure and drinking water treatment and distribution systems, according to EPA data. These funds supported more than 3,000 water quality infrastructure projects nationwide. Since the Recovery Act was passed, states have drawn down $3.1 billion (79 percent) of the Clean Water SRF program funds and $1.7 billion (83 percent) of the Drinking Water SRF program funds provided under the Recovery Act. States also met the act's requirements that at least (1) 20 percent of the funds provided be used to support "green" projects, such as those that promote energy or water efficiency, and (2) 50 percent of the funds be used to provide additional subsidies in the form of loans for which the principal is forgiven, loans for which the repayment is less than the principal (negative interest loans), or grants. In the nine states GAO reviewed, Recovery Act funds have paid for 419 infrastructure projects that help to address major water quality problems, although state officials said that in some cases, Recovery Act requirements changed their priorities for ranking projects or the projects selected. For example, because some projects could not meet the act's requirement to have funds under contract by February 17, 2010, some states provided Recovery Act funds to lower-ranked projects. Some states provided funding to these priority projects in other ways, such as through state grants or non-Recovery Act SRF funds. In addition, although not required by the Recovery Act, the nine states used 24 percent of the funds they received to pay for projects in economically disadvantaged communities, the majority of which was provided as additional subsidies. States reported that the Recovery Act SRF programs funded an increasing number of full-time equivalent (FTE) positions from the quarter ending December 2009 through the quarter ending June 2010, from 6,000 FTEs to 15,000 FTEs, declining to 6,000 FTEs for the quarter ending in March 2011 as projects were completed. EPA and the states are overseeing Recovery Act projects and funds using EPA's oversight plan, updated in June 2010 in response to recommendations GAO made to specify procedures for oversight. 
The fiscal year 2010 and 2011 appropriations for the SRF programs continue the green project and additional subsidy requirements. State officials GAO interviewed identified challenges in implementing these requirements for the Clean and Drinking Water SRF programs, including: (1) Encouraging green projects. Officials in some states said that the goal of supporting green projects is important but that the percentage of funds specifically dedicated to green projects (20 percent) was difficult to achieve. (2) Providing subsidies. Officials in several of the nine states noted that when monies are not repaid into revolving funds to generate future revenue for these funds, the SRF program purpose changes from primarily providing loans for investments in water infrastructure to providing grants.
Consular officers issued about 6.2 million nonimmigrant visas in 1996—an increase of approximately 16 percent over the number issued in 1992. The total budget for consular relations activities has also increased significantly in recent years. The budget grew from about $259 million in fiscal year 1992 to an estimated $470 million in fiscal year 1998. The State Department’s Bureau of Consular Affairs Program Plan for fiscal years 1998-99 (an annually updated planning document containing strategies for executing the Bureau’s mission) notes that the greatest demand for visas is in advanced developing countries such as Brazil and South Korea, among others. Table 1 shows the numbers of nonimmigrant visas issued at the top five nonimmigrant visa-issuing posts in fiscal year 1996. Foreign visitors traveling to the United States are a significant source of revenue for U.S. businesses. According to the Department of Commerce’s International Trade Administration Tourism Industries Office, foreign visitors spent close to $70 billion in the United States in 1996. The office’s figures indicate that Brazilian visitors spent over $2.6 billion in the United States, or more than $2,900 per visit, during the same period. In order to safeguard U.S. borders and control the entry of foreign visitors into the country, U.S. immigration laws require foreign visitors from most countries to have a visa to enter the United States. However, the United States currently waives the requirement for visitor visas for citizens of 26 countries considered to pose little risk for immigration and security purposes. According to a consular official, Brazil does not currently qualify for visa waivers primarily because the refusal rate for Brazilian visa applications exceeds the allowable limits: to qualify, a country’s refusal rate must be less than 2.5 percent in each of the previous 2 years and must average less than 2 percent over those 2 years. The Department of State has primary responsibility abroad for administering U.S. immigration laws. Consular officers at overseas posts are responsible for providing expeditious visa processing for qualified applicants while preventing the entry of those who are a danger to U.S. security interests or are likely to remain in the United States illegally. State’s Bureau of Consular Affairs develops policies and manages programs needed to administer and support visa-processing operations at overseas posts and has direct responsibility for U.S.-based consular personnel. State’s geographic bureaus, which are organized along regional lines (such as the Bureau of Inter-American Affairs), have direct responsibility for the staffing and funding of overseas consular positions. The process for handling nonimmigrant visas varies among overseas posts. Among the methods used to serve visa applicants, posts (1) receive applicants on a “first-come, first-served” basis, (2) operate appointment systems to schedule specific dates and times for applying, (3) employ travel agencies to act as intermediaries between applicants and the consulate, and (4) use “drop boxes” for collecting certain types of visa applications. Individual posts may use one or various combinations of these approaches. In addition to submitting a written application and supporting documentation, an applicant must be interviewed by a consular officer, unless the interview is waived. Consular officers may request additional documentation to validate the applicant’s intention to return home or confirm that sufficient financial resources are available for the trip. 
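To make the visa waiver refusal-rate test described earlier concrete, the following minimal sketch applies the two thresholds as they are described above; the refusal rates used are hypothetical and serve only to show how the two conditions interact.

```python
# Sketch of the visa waiver refusal-rate test described above: a country's
# nonimmigrant visa refusal rate must be below 2.5 percent in each of the
# previous 2 years and must average below 2 percent over those 2 years.
# The rates used here are hypothetical and for illustration only.
def meets_refusal_rate_test(rate_year1: float, rate_year2: float) -> bool:
    each_year_ok = rate_year1 < 0.025 and rate_year2 < 0.025
    average_ok = (rate_year1 + rate_year2) / 2 < 0.02
    return each_year_ok and average_ok

print(meets_refusal_rate_test(0.018, 0.021))  # True: both years under 2.5%; average is 1.95%
print(meets_refusal_rate_test(0.024, 0.019))  # False: average of 2.15% exceeds the 2% ceiling
```

As the second case shows, a country can stay under the 2.5 percent per-year ceiling and still fail the test on the 2 percent average.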
Consular officers are also responsible for deterring the entry of aliens who may have links to terrorism, narcotics trafficking, or organized crime. Nine of the 26 consulates we reviewed, including the one in Sao Paulo, experienced backlogs in processing nonimmigrant visas to the United States in fiscal year 1997. The backlogs ranged from 8 to 52 days and occurred primarily during peak travel seasons for tourists. State does not systematically compile information on visa processing turnaround times at overseas posts nor has it established a time standard for processing visas. However, the Deputy Assistant Secretary for Visa Services indicated that a maximum wait of 1 week (5 business days) for an appointment to apply for a nonimmigrant visa is desirable. She also told us that an additional 1 or 2 days are generally needed to process the visa after the appointment occurs. Thus, we concluded that a maximum desirable total turnaround time for appointment system cases would generally be 7 business days. Since the total turnaround times for other processing methods are generally shorter than for appointment systems, we used 7 business days as a cutoff point beyond which we considered a backlog to exist for all processing methods. Although consulates often manage to process nonimmigrant visa applications within 7 business days during periods of low demand, turnaround times lengthen significantly at some consulates when demand is high. Peak periods generally occur during the summer months or winter holiday season. Of the nine posts that had peak-season backlogs exceeding 7 business days, four had turnaround times that were less than 15 business days and five had turnaround times that were 15 business days or more. These figures represent the highest turnaround times that posts reported among the various application methods that they use. Table 2 lists the total turnaround times for processing visas during peak periods at the five posts that had backlogs that were 15 business days or more in fiscal year 1997. At the consulate in Sao Paulo, Brazil, turnaround times varied depending on the visa processing method involved. In fiscal year 1997, about 63 percent of the consulate’s nonimmigrant visa applications were submitted through travel agents, and about 27 percent were handled through the consulate’s appointment system. The remaining 10 percent were processed using other methods such as a “drop box.” Visa applications submitted through travel agents were subject to a total turnaround period of 10 business days during periods of high demand and less than 5 business days during periods of low demand. Turnaround times for those who requested an appointment to apply for a visa reached as long as 20 days during busy periods—twice the length we noted in our 1992 report on visa-processing backlogs. In nonpeak periods, the turnaround time for those who requested appointments was 9 business days. For fiscal year 1997, approximately 86,000 applicants used the consulate’s appointment system. Consulate officials told us that the turnaround time for applications received through the “drop-box” method is generally kept within 5 business days during both peak and nonpeak periods. State pointed out that, while the Sao Paulo consulate’s turnaround times have increased since 1992, the volume of nonimmigrant visa applications processed in Sao Paulo has also increased from 150,088 in fiscal year 1992 to 319,341 in fiscal year 1997. 
State reported that the Sao Paulo consulate processed an average of 1,250 nonimmigrant visas per day in fiscal year 1997. During the same period, the number of consular section foreign service officer positions increased from four to seven. In 1995, the Sao Paulo consulate established an appointment system to alleviate long lines outside the consulate that were causing complaints from neighbors and negative reports in the local press. The consulate also began employing appointment delays as a disincentive to applying in person and to encourage applicants to apply for visas through the consulate’s travel agency program—a technique that it considered to be more efficient. As part of this approach, the consulate initiated a practice of not scheduling any appointments on Wednesdays, so that consular officers could concentrate on processing travel agency cases that day. Sao Paulo consular officials told us that this approach had been successful in reducing the length of applicant lines, increasing the use of the consulate’s travel agency program, and improving productivity. On the other hand, the total turnaround time increased for those applying for visas in person through the appointment system. According to the Consul General in Brasilia, the Sao Paulo consulate’s appointment system and its practice of closing to the public on Wednesdays unfairly penalizes applicants that apply in person. He said that the consulate should develop an approach that enables it to provide high levels of service for all application methods. Officials in State’s Bureau of Inter-American Affairs told us that the Brazil Desk received an average of one complaint per week from U.S. companies concerning difficulties that their Brazilian business associates were having in obtaining visas in Sao Paulo. The Consul General in Brasilia said that as many as 10 visa applicants from the Sao Paulo consular district underwent the inconvenience of traveling to and applying for visas in Brasilia each day rather than in Sao Paulo because they had encountered delays and other difficulties in Sao Paulo. He added that an additional unknown number travel to the consulate in Rio de Janeiro each day or simply elect not to travel to the United States at all. Representatives of the travel industry in Brazil told us that, while there have been substantial improvements in reducing visa backlogs and long lines at the Sao Paulo consulate in recent years, they still receive complaints about the length of time that it takes to obtain a U.S. visa in Sao Paulo. A representative of the American Chamber of Commerce in Brazil agreed that there had been improvements in recent years but said that the process remains particularly troublesome for Brazilian business executives who sometimes need to obtain visas on an emergency basis for unexpected business trips to the United States. Consular officers face a number of obstacles to providing expeditious service in processing visas. Inadequate consular staffing at overseas posts and other staffing-related issues were identified as barriers to timely processing of visas by the majority of posts that we reviewed. Other impediments to efficient processing include inadequate computer systems, equipment, and consular facilities. Increased attention devoted to preventing suspect applicants from entering the United States has also led to delays. Similar to what we reported in 1992, consular personnel cited staffing problems as some of the most persistent barriers to processing visas efficiently. 
Nineteen of the 26 consulates we reviewed reported staffing problems, such as staffing gaps due to transfers of foreign service officers during peak periods or inadequate permanent staffing positions. Of particular concern were staffing gaps that occurred during peak seasons. Since the summer months are among the busiest periods for processing nonimmigrant visas at many posts, consular sections should be operating at full capacity during these periods. However, according to consular officials, they often are not because State’s annual personnel reassignments take place then. A consular official in Bogota told us that the lengthy wait for appointments there was due in large part to extended staffing gaps. Officials in the Bureau of Consular Affairs said that State’s system of mass employee transfers during the summer months is intended to promote fairness in the assignment bidding process and convenience for officers with school-age children, even though it does not result in optimal staff coverage during peak periods. Some consulates reported that, even when all of their authorized positions are filled, staffing levels are inadequate, particularly at posts that have experienced significant increases in visa demand. Figure 1 depicts overseas foreign service officer staffing for visa services and nonimmigrant visa work load trends from fiscal years 1993 through 1996. According to a senior consular official, the hiring of junior officers—the primary source of consular staff support—has not kept pace with foreign service officer attrition over the last several years. This has resulted in staffing shortages in consular sections at many overseas posts. The Bureau of Consular Affairs Program Plan for fiscal years 1998-99 stated that the shortage of consular officers had seriously undermined efforts to meet the increasing demand for consular services. Another staffing issue that consular officials raised concerned State’s process for allocating staff at overseas posts. The Bureau of Consular Affairs does not control assignments of consular positions at overseas posts; rather, State’s geographic bureaus are in charge of these positions. Consular officials said that this arrangement causes delays in reallocating positions to correspond with shifting work loads at various posts. Such reallocations are particularly troublesome when they involve moving positions from one geographic bureau to another. For example, if a U.S. consulate in a Latin American country encountered a significant increase in consular work load while a consulate in East Asia experienced a corresponding decline, the Bureau of Consular Affairs would not have the authority to shift one or more consular positions from one consulate to the other. Rather, it would have to convince the Bureau of East Asian and Pacific Affairs to relinquish the positions and the associated funding, while persuading the Bureau of Inter-American Affairs to accept them. A senior consular official told us that the Bureau of Consular Affairs had recently proposed to the Under Secretary for Management that the Bureau be given greater control over the staffing and funding of overseas consular positions. The official said that the Under Secretary for Management is still considering the proposal. With regard to the adequacy of staffing in Sao Paulo in particular, consulate officials there told us that consular section staffing is insufficient to meet the high demand for nonimmigrant visas. 
The officials said that, due to transfers of foreign service officers and other factors, the unit had been staffed with a full contingent of authorized positions for only 6 months in the last 2 years. In addition, even when the section is fully staffed, the number of authorized positions is inadequate. At the time of our recent visit to Sao Paulo, the nonimmigrant visa section had seven foreign service officer positions, one of which was vacant. The unit also had 19 foreign national employee positions, including a receptionist, and 4 U.S. family member positions, 1 of which was vacant. Consular section officials said that, to reduce visa backlogs to within 7 working days, they would need two additional foreign service officers, five additional foreign national employees, and two additional U.S. family member employees. The Sao Paulo consular section sometimes employs additional U.S. family members to provide assistance on a temporary basis but has experienced problems securing such staff in time to optimize their help during peak periods. Consulate officials told us that the complexities of the various funding and hiring mechanisms for obtaining temporary staff make it difficult to quickly hire them. The officials added that the low salaries for family member staff also make it hard to attract applicants among the few eligible family members at the post. According to a senior consular official, there are no current plans to address staffing shortages specifically at the consulate in Sao Paulo. The official said that State has staffing shortages worldwide and that it plans to hire new foreign service officers to help deal with the shortages. Sao Paulo’s permanent position staffing needs will be considered along with the needs of other posts as part of the normal resource allocation process. The official added that State has also taken measures to temporarily fill peak season staffing gaps in overseas consular sections. Consular officials pointed to inadequate computer and other equipment as further barriers to efficient visa processing. Fourteen of the 26 consulates we reviewed reported to us that they had such problems. One consulate noted that the vast majority of delays in processing visas were caused by computer equipment and systems failures. Another consulate reported in its “consular package” (an annual report to the Bureau of Consular Affairs on each post’s consular operations) that frequent and prolonged breakdowns in the system for performing name checks on visa applicants had hindered visa processing during the peak summer season. Consular officials told us that there is a need for additional and better auxiliary equipment such as high-capacity fax machines and telephone answering machines. Inadequate physical facilities also impede efficient visa processing at some consulates —a problem noted in our 1992 report as well. Thirteen of the 26 consulates we reviewed identified poor work space or inadequate physical structures as a major impediment to efficient processing. For example, Sao Paulo consular officials said that inadequate space limited their options for dealing with increased demand for visas. To illustrate this problem, the consulate had been able to offer a relatively short turnaround time for former visa holders who dropped off their applications for renewal near the entrance to the consulate grounds; there, a foreign national employee provided information, determined whether the applicant qualified for this method, and checked the applications for completeness. 
However, there is insufficient physical space to expand the use of this method at this location. Consulate officials told us that they could explore the use of an offsite location for collecting “drop-box” applications. As a result of heightened concerns about terrorism and illegal immigration in recent years, the U.S. government launched a number of initiatives to strengthen U.S. border security. These efforts included financing new technology for providing consular officers with comprehensive information on persons who may represent a threat to U.S. security. Consular officials noted that, although the enhanced systems helped bolster border security, they sometimes resulted in increased visa-processing times. For example, name-check systems now identify many more applicants as potential suspects; therefore, consular officers must take additional time to review these cases in determining eligibility for visas. Achieving an appropriate balance between the competing objectives of facilitating the travel of eligible foreign nationals to the United States and preventing the travel of those considered ineligible poses a difficult challenge for consular officers. Consular officers told us that a renewed emphasis on holding them personally accountable for visa decisions on suspect applicants had led to greater cautiousness and an increase in the number of requests for security advisories from Washington. As a result, while same-day processing of visas used to be commonplace, consular officials told us that greater requirements related to border security had made same-day service more the exception than the rule. State has made a number of changes in an effort to improve its visa-processing operations in recent years, and some of these initiatives could help in overcoming barriers to timely visa issuance. It has devised methods for handling staffing problems and developed a model to better plan for future resource needs at consulates abroad. State has improved computer and telecommunications systems and has other equipment upgrades underway, some of which will help address visa-processing problems. In addition, State has undertaken an initiative to identify and implement better work load management practices for visa processing at overseas posts. However, State has yet to define and integrate time standards as part of its strategy to improve the processing of nonimmigrant visas. Establishing such standards could help in identifying visa-processing backlogs, better equipping State to determine the corrective measures and resources needed. According to a senior consular official, State plans to hire over 200 new foreign service officers in fiscal year 1998 to help solve staffing shortages created by gaps between hiring and attrition levels in recent years. State has also begun experimenting with a number of approaches to fill peak-season staffing gaps at overseas consular sections. For example, the Bureau of Consular Affairs recently established a cooperative program with American University, located in Washington, D.C., to hire and train university students to work in consular positions in Washington, thus allowing the consular personnel that hold these positions to temporarily fill summer staffing gaps overseas. The Bureau also recruits retired foreign service officers to fill overseas consular staffing gaps on a temporary basis and is developing a “consular fellows” pilot program to fill vacant entry-level consular positions. 
The fellows program involves hiring temporary employees with foreign language skills to serve as consular staff on a short-term basis. State has also expanded the use of temporary employment of U.S. foreign service family members at overseas posts in recent years. Family members often perform administrative and procedural tasks in support of consular officers. Officials at one post told us that extended staffing gaps and shortages had caused them to rely on family member employees to perform a wider range of duties than they had in the past. The officials said that doing so enabled the post to keep its nonimmigrant visa-processing turnaround time under 7 business days. State has developed a consular staffing model based on visa work load and related information that it plans to use to help determine adequate consular staffing and to help identify personnel from surplus areas that could be moved to understaffed ones. The current model does not include foreign national employees—an important element of overall consular staffing at overseas posts. Also, according to one consular official, the model may be based on outdated data that does not take into account the increased visa demand and other changes in some countries. State is refining and updating the model to address these limitations and to factor in the impact of other visa-processing improvement efforts. State made major investments in computer and telecommunications infrastructure in recent years and has other equipment upgrades under way for overseas posts that issue visas. For example, every visa-issuing post now has a machine-readable visa system and automated name-check capability. State has also begun installing second generation upgrades to the machine-readable visa system at posts. State plans to install the necessary hardware and software to run this upgraded system at 100 posts in fiscal year 1998 and to have the system in all visa-issuing posts by the end of fiscal year 1999. The equipment upgrades have resulted in significant improvements in some aspects of visa processing. For example, improvements in some backup systems for name checks now allow visa processing to continue when on-line connections with Washington are not operating. In the past, such disruptions resulted in significant delays in processing visas. More importantly, according to consular officials, the upgrades have resulted in better and more comprehensive information about applicants who might pose a security threat, thus contributing to higher quality decision-making with respect to visa applications. In an effort to identify and implement better work load management practices for visa processing, State established a Consular Workload Management Group in November 1996. Although the effort is still ongoing, the group has already identified a number of practices. Among them were the following: Recorded General Information. This system allows the applicant to get information about the application process without tying up staff resources. A 900-type telephone number, in which the user pays the cost of a call, can be established for this purpose. An Appointment System. An appointment system can reduce the applicant’s waiting time in line and enable the post to control its work load by specifying the number of applicants who can be seen in a given day. Such a system allows an applicant to schedule an interview at a specific date and time. Prescreening. 
This procedure requires an employee to ask an applicant a few questions and to quickly determine whether the applicant is clearly eligible to receive a visa or whether the applicant must be interviewed by an officer. Noncashier Fee Collection. This process allows applicants to pay the machine-readable visa fee at a bank or other financial institution. The applicant then presents the fee payment receipt when processing the application, thus eliminating the need for a cashier at the post to handle the fee transaction. Travel Agency/Corporate Referral Program. This practice allows posts to designate selected travel agencies and large companies to perform some initial processing of nonimmigrant visa applicants who meet certain criteria. Agencies and companies are trained to ensure that applicants’ documents are in order and are frequently asked to enter pertinent data on the application form. In some cases, agencies and companies forward information to the post electronically, usually via computer diskette. Other practices identified include public information campaigns urging applicants to apply well in advance of their intended travel dates and the use of color-coded boxes to simplify the return of passports on particular days. Some of the practices identified are easy to implement, such as color coding; others are more complex, such as establishing noncashier fee collection systems. The willingness and ability to implement these practices varies by post. According to consular officials, State is currently in the process of identifying posts that are already employing these practices. It is important to note that, while some of these practices can aid in better managing consular work loads, the use of such tools does not guarantee a reduction in visa-processing times. In some cases, these techniques may actually contribute to backlogs, depending on how they are managed. One of the most controversial tools in this respect is the appointment system. According to some consular officials, posts inevitably schedule fewer appointments per day than the number of applicants, causing backlogs and public relations problems. Consular management must deal with increased phone calls and requests for emergency processing when the wait for an appointment becomes unreasonably long. All nine of the surveyed posts that had peak-season backlogs in fiscal year 1997, including the consulate in Sao Paulo, used appointment systems. On the other hand, some high-volume posts that did not use appointment systems managed to keep the total turnaround time for processing visas under 7 business days, even in periods of very high demand. For example, in Rio de Janeiro, the total turnaround time for processing “walk-in” nonimmigrant visa applications was 2 days during peak and nonpeak seasons. The post in Mexico City issued visas the same day that applicants walked in, whether in peak or nonpeak seasons; however, a post official told us that applicants often have to wait for several hours in line. According to Deputy Assistant Secretary for Visa Services, State does not systematically compile information on visa processing turnaround times at overseas posts nor has it established formal timeliness standards for visa processing. State’s consular guidance makes references to the importance of minimizing waiting time and return visits for visa applicants but does not specifically address total turnaround time. On the other hand, State has timeliness standards for issuing passports to U.S. 
citizens within 25 days after receiving the application. The usefulness of such standards in helping to manage for results is now widely recognized. Some consulates continue to experience backlogs in processing nonimmigrant visas. Although State has taken a number of actions to improve its visa-processing operations, it has not made a systematic effort to identify and address visa-processing backlogs on a global basis. We believe that State’s improvement efforts need to be guided by formal timeliness standards for issuing nonimmigrant visas. Establishing such standards could assist in identifying backlogs, putting State in a better position to determine the resources and actions needed to correct them. Timeliness standards could also help State’s efforts to implement better work load management practices and to improve long-range planning for staffing and other resource needs. To determine the appropriate level and mix of resources needed and to take full advantage of ongoing efforts to improve visa operations, we recommend that the Secretary of State develop timeliness standards for processing nonimmigrant visas. In its written comments on a draft of this report, State said that the report was a balanced and informative account of the problems faced by consular posts abroad. While State did not directly disagree with the report’s recommendation that it develop timeliness standards for processing nonimmigrant visas, State indicated that setting and meeting such standards should be linked to the adequacy of resources. State also expressed concern that timeliness standards might be overemphasized to the detriment of border security goals. State said that imposing rigid standards could adversely affect consular officers’ thoroughness in scrutinizing visa applicants. We agree that setting and meeting timeliness standards should be linked to the adequacy of resources. In fact, we believe that such standards could assist in identifying backlogs, and therefore put State in a better position to determine the level of resources needed to achieve desired levels of both service and security. They could also help State to better manage its resources. We recognize the importance of maintaining quality in the adjudication of visas and believe this element should be built into any timeliness standards or implementing regulations. We also note that some of State’s overseas posts have already established their own timeliness standards for processing nonimmigrant visas and have managed to meet them, even though some of these posts are located in areas considered to be at high risk for visa fraud. We are sending copies of this report to the Secretary of State and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-4128 if you or any of your staff have any questions concerning this report. The major contributors to this report are listed in appendix III.
Pursuant to a congressional request, GAO reviewed how Department of State consulates process visas for visitors (nonimmigrants) to the United States, focusing on the: (1) extent and nature of visa processing backlogs in Sao Paulo, Brazil, and at other consulates; (2) factors affecting consulates' ability to process nonimmigrant visas in a timely manner; and (3) activities planned or under way to improve nonimmigrant visa processing. GAO noted that: (1) visa processing backlogs are a problem for some consulates, including the one in Sao Paulo; (2) the visa backlogs at the consulates GAO reviewed varied widely, ranging from 8 to 52 days; (3) the longest delays occurred during peak travel periods such as the summer months and winter holiday season; (4) factors that affected consulates' ability to process nonimmigrant visas in a timely manner included inadequate consular staffing and other staffing-related issues as well as inadequate computer systems, facilities, and other equipment; (5) an increased emphasis on preventing the entry of illegal immigrants, terrorists, and other criminals also contributed to delays; (6) State has initiatives under way to address staffing problems, upgrade equipment, and identify and implement practices that could improve visa processing at overseas posts; and (7) however, it does not systematically gather data on visa processing turnaround times and has not yet set specific timeliness standards to help guide its improvement program.
Base operations support services, generally called commercial activities, are the functions necessary to support, operate, and maintain DOD installations. Although the Office of Management and Budget (OMB) identifies 29 services as base support functions, DOD does not have a generally accepted definition of base support services, and the military services differ in how they define them. Without a common definition it is difficult to accurately determine the size and cost of DOD’s base support workforce; however, DOD estimates that base support activities such as facilities and vehicle maintenance, food services, and local transportation cost more than $30 billion in fiscal year 1997. Numerous studies from the 1993 Bottom-Up Review through the recent Quadrennial Defense Review, Defense Reform Initiative, and National Defense Panel have concluded that DOD could realize significant savings by outsourcing commercially available support services. Some studies have concluded that DOD could achieve the largest savings by using a single contract, rather than several smaller contracts, to encompass multiple base operations support services. Although a subject of increasing emphasis in recent years, federal agencies have been encouraged, since 1955, to obtain commercially available goods and services from the private sector through outsourcing, or contracting out, whenever they determine it is cost-effective. In 1966, OMB issued Circular A-76, which established federal policy for the government’s performance of commercial activities and set forth the procedures for studying commercial activities for potential contracting. Later, in 1979, OMB issued a supplemental handbook to the circular that included the procedures for competitively determining whether commercial activities should be performed in-house, by another federal agency through an interservice support agreement, or by the private sector. OMB updated this handbook in 1983 and again in March 1996. Most of the multiple service contracts were initiated when the installation performed a commercial activities study and all but one were established in the 1980s or earlier. As shown in table 1, the estimated costs of the contracts range from $5.4 million to $100 million annually and most were awarded on a fixed-price basis. Single contracts for multiple support services were critical to meeting the overall requirements for base operations support at all 10 installations we reviewed; however, none used a single contract to meet all of its requirements. At 7 of the 10 installations we reviewed, we were told that the decisions to use single contracts for multiple services occurred when the installations performed formal studies to determine whether the commercial activities should be performed in-house or by a contractor. All of these contracting efforts, except one, were initiated in the 1980s. At the other three locations, officials told us that the decision to use a single contract for multiple services was made at the time that the installation or its current mission was established. Of the installations we visited, Laughlin Air Force Base was the one that most recently made a decision to use a single contract for multiple services in connection with a commercial activities study. A contracting official at Laughlin stated that the study had been done as a result of a DOD Management Review Directive. The study, conducted from April 9, 1992, to July 12, 1996, resulted in a contract awarded in 1996 pursuant to a small business set-aside. 
The contract initially is for about $5.4 million annually and will provide for functions dealing with supply, civil engineering, fuels management, and vehicle operation and maintenance. Naval Air Station Fallon went to a multiple service contract in 1987 following a commercial activities study that was conducted from May 1981 until January 1984. Officials at Fallon could not say for certain but believed the study was conducted because of the priority placed on contracting out at the time. The current contract is the third multiple service contract and is worth about $15 million annually. The contract covers such functions as food service, supply, pest control, custodial, housing, and airfield services, as well as operating a combined bachelor quarters facility. At the U.S. Army Tank-Automotive and Armaments Command in Michigan, we found that the Army had two contracts for multiple base operations support services that followed separate commercial activities studies conducted in 1981 and 1982. Each of the contracts covered services at separate locations that are approximately 20 miles apart. Each contract was competitively awarded until fiscal year 1989 when a decision was made that it would be in the best interest of the government to combine the requirements under a single multiple service contract and reduce overhead and contract administration costs. The current contract for approximately $15 million annually covers such functions as freight, supply, warehousing, facility engineering, housing, and administrative services at the two locations. At three installations—Vance Air Force Base, Arnold Air Force Base, and Naval Submarine Base Bangor—the decision to use multiple service contracts was made at the time the installation was established. The decision at Vance was based on an Air Force decision to evaluate the success of contracting out as compared to another base that performed the services with in-house personnel. At Arnold Air Force Base, the decision was based on a 1950 study by the Scientific Advisory Board of how the engineering development and test center should be operated. According to contracting officials, the study recommendation and a lack of qualified Air Force personnel at the time led the Secretary of the Air Force to direct that the services be provided through a contract. Naval Submarine Base Bangor was activated in 1976 and the decision to contract out base operations support services, according to a contracting official, was based on a study by a private Seattle company that determined a contract operation would be more cost-effective. At 7 of 10 installations we reviewed, contracting officials have awarded fixed-price contracts for multiple base operations support services (see table 2). At the other three locations, contracting officials have awarded cost type contracts. In some instances, incentives or award fees were included within each of these types of contracts to contain or reduce costs. Regardless of the contract type, eight have been awarded on a 5-year basis. A firm-fixed price contract provides for a price that is not subject to any adjustment on the basis of the contractor’s cost experience in performing the contract. It remains firm for the life of the contract unless revised pursuant to the changes clause in the contract. It places maximum risk on the contractor and minimum risk on the government. The contractor is responsible for all costs incurred and the resulting profit or loss. 
The cost contract places more risk on the government and less risk on the contractor. Under cost contracts, the contractor is reimbursed for all reasonable and allowable costs incurred. In conjunction with these contracts, award fees are often used to provide incentives for outstanding performance in areas such as timeliness, quality, and cost effectiveness. The maximum amount of the award, periods of evaluation, and the officials who determine the fee are specified in an award-fee plan that is part of the contract. With the exception of three contracts, an award fee provision was included to foster maximum contractor performance based upon the government’s subjective evaluation of the contractor’s level of performance. At Fort Irwin, contracting officials decided that a cost-type contract was preferable to fixed price because the workload and workforce were continually changing and requirements could not be adequately defined beforehand. At Vance, the decision to use a fixed-price contract was due to the nature of the contract requirements, where the contractor provides mainly labor, and the number of employees, their respective labor rates, and expected hours were all known. This allowed contracting officials and offerors to estimate the cost of the contract with a higher degree of confidence. The multiple service contracts we reviewed were generally awarded for 5 years (1 base year and 4 option years). However, we did find two contracts with longer performance periods. At Arnold Air Force Base its cost-plus-award-fee contract was awarded in 1995 for 8 years (5 years and a single 3-year option) to foster workforce stability and morale. At the other, Naval Submarine Base Bangor, a 10-year fixed-price-award-fee contract (1 base year and 9 option years) was awarded in 1997. Based on suggestions from contractors during a presolicitation conference, the Bangor contract was increased from 5 to 10 years to save money over the life of the contract by allowing contractors to spread their costs over more years. Officials also expected that the change would encourage more companies to compete for the contract. The contract also includes incentives for the contractor to reduce costs. It further includes a provision for the contractor to meet ISO 9000 standards to better ensure it can meet customer requirements and help reduce contract monitoring costs. At the 10 installations we reviewed, base operations support requirements were being met through a variety of means, including in-house personnel, as well as single contracts for multiple services, single contracts for specific services, and regional contracts. Several of the installations, including Arnold Air Force Base and Naval Submarine Base Bangor, rely heavily on single contracts for multiple base operations support services. In contrast, Fort Belvoir and Naval Air Station Whiting Field use single contracts for multiple services but also rely heavily on other contracts or in-house personnel to meet these support requirements. Arnold has used a single contract for virtually all base operations support services from the time the installation was established. The first multiple service contract was awarded in 1951 and provided for all testing and support services at the installation. In fiscal year 1981, the contract was separated into three contracts—two testing contracts and one for multiple base operations support services. 
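The difference in risk allocation between the two contract types described above can be illustrated with a simple comparison; the dollar amounts, award-fee pool, and performance score in the sketch below are hypothetical and are not drawn from any of the contracts we reviewed.

```python
# Sketch contrasting how cost experience affects the government's payment and
# the contractor's result under a firm-fixed-price contract versus a
# cost-plus-award-fee contract. All figures are hypothetical.
fixed_price = 10_000_000      # agreed price; not adjusted for the contractor's cost experience
actual_cost = 10_300_000      # allowable costs actually incurred during performance
award_fee_pool = 500_000      # maximum award fee available under the award-fee plan
performance_score = 0.8       # government's subjective evaluation of performance (0 to 1)

# Firm-fixed-price: the government pays the agreed price; any overrun is the contractor's loss.
government_pays_ffp = fixed_price
contractor_result_ffp = fixed_price - actual_cost   # -300,000, a loss borne by the contractor

# Cost-plus-award-fee: the government reimburses allowable costs and pays the earned fee.
earned_fee = award_fee_pool * performance_score     # 400,000
government_pays_cpaf = actual_cost + earned_fee     # 10,700,000; the overrun falls on the government

print(government_pays_ffp, contractor_result_ffp)
print(government_pays_cpaf, earned_fee)
```

The same overrun that produces a loss for the contractor under the fixed-price arrangement is reimbursed by the government under the cost-type arrangement, which is why the award fee is used to keep the contractor focused on timeliness, quality, and cost control.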
The support services contract includes a wide variety of functions such as central computer operations, base support and maintenance, environmental, utilities, logistics, transportation, base security, and fire protection. In addition, Arnold uses in-house personnel to perform morale, welfare, and recreation services. Similarly, Bangor accomplishes nearly all base operations support service requirements through a single contract. The current contract for multiple support services spans 10 years and provides a wide range of base support services, including administrative support, various public works services, utility and supply services, and security services. Bangor has used a multiple service contract for base operations support services since it was activated as a submarine base in 1976. Bangor has several individual contracts to meet additional support needs such as family services, food preparation and administration, architect and engineering services, and maintenance of automated data processing equipment. Also, Bangor provides services such as morale, welfare, and recreation; automated data processing; and crane inspection and certification through the use of in-house personnel. In contrast, Fort Belvoir and Naval Air Station Whiting Field are using single contracts for multiple services but also rely heavily on other contracts for specific support services and in-house personnel to meet base operations support requirements. Fort Belvoir’s current multiple support services contract is a firm-fixed price contract for 5 years. The contract includes such services as family housing, grounds, pest control, hospital operations and maintenance, and refuse collection. Fort Belvoir uses other contracts for specific support services such as major road repairs, asbestos removal, and custodial services. Military personnel provide such services as installation security and medical functions at the hospital. In-house civilians provide morale, welfare, and recreation; logistics management; and information management services. Similarly, Whiting Field meets its needs for base operations support through a single contract for multiple support services, several contracts for specific services, and the use of in-house personnel. The current multiple service contract was awarded for 5 years beginning in fiscal year 1997. Services in the contract include waste water treatment, pest control, grounds maintenance, hazardous materials management, communications systems, transportation, and utilities services. The first contract was awarded in fiscal year 1983 for a 3-year period following a commercial activities study. In addition to the multiple service contract, functions such as custodial, military family housing maintenance and repair, aircraft maintenance, and simulation are provided under single service contracts. Services such as morale, welfare, and recreation; fire protection; supply services; ground electronics; and child development are provided by in-house personnel. At the 10 installations we reviewed, the single contracts for multiple support services generally contained a broad range of activities, ranging from refuse collection to computer support. Appendix II identifies selected base operations support activities included in contracts we reviewed. 
Although contracting officials at different installations often use the same or similar terms to describe different services, we found that activities such as public works services, pest control, hazardous waste removal, family housing management, and administrative services were generally included in the multiple support service contracts—although differences existed in the degree to which activities within these categories were included in individual contracts. Thus, individual contracts we examined varied in the extent to which the range of activities identified at one installation was comparable with another installation’s contract. Functions and base support activities included in a given contract may vary due to requirements of the installation designing the contract to meet individual needs associated with its mission, the geographical location, and command preferences. For example, contracting officials said that base commanders need the flexibility to determine which functions to include in their multiple service contracts in order to most effectively serve each base’s needs. At Arnold Air Force Base, where the mission is the testing of aerospace hardware, a provision was developed to preclude a manufacturer of aerospace hardware from competing for the contract, thus avoiding a conflict of interest. At Fort Irwin, functions such as provost marshal and morale, welfare, and recreation were included because the base was being reactivated, while cooking was excluded so that soldiers could receive training. The geographic location of Laughlin Air Force Base affected services in its contract because, according to officials, the grass in that region grows extremely fast and must be cut frequently. Contracting officials at the 10 installations we reviewed have learned a number of lessons from their experience with single contracts for multiple base operations support services. Among the lessons most often cited were the need for well-developed and articulated requirements, and the importance of award fees and best-value selection criteria where appropriate. Also, while there can be significant advantages to using multiple service contracts, there can be some disadvantages. Well-developed contract requirements articulated in performance work statements were often mentioned as necessary to the successful execution of multiple service contracts. Contracting officials stated that the contract requirements should, in the case of simple tasks, be very specific so there is no question as to what is expected. For more complex situations and needs, results-oriented requirements that describe the government’s desired outcomes without telling the contractor exactly how to accomplish the tasks are preferable. Contracting officials at Naval Submarine Base Bangor stated that the performance work statement in their multiple service contract was a good example of such results-oriented requirements. For example, they cited the area of vehicle maintenance, where the performance work statement states that the contractor is to maintain vehicles in accordance with the manufacturers’ schedules, and that the amount of time that the vehicles are not available for use shall not exceed minimum standards. They also stated that a well-defined performance work statement is the key to meeting these requirements and preventing excessive modifications to contracts and unanticipated cost increases. 
For example, the Laughlin base operations support contract was considered so straightforward and well defined that it contains no award-fee provision. At Fort Belvoir, in contrast, a contracting official stated that many of the modifications to the installation's multiple service contract were due to incorrect inventories of equipment and confusion over what equipment the contractor could use. This official also said that when discrepancies arise as to what is required, it is often because requirements are not covered adequately in the contract and the government must modify the contract to get the services needed. Contracting officials at 8 of the 10 installations we reviewed stated that award fees help focus the contractor on feedback from base personnel receiving the services and result in better responsiveness and higher quality work because these awards provide a monetary incentive for outstanding performance. At the two installations whose contracts did not contain an award fee (Laughlin Air Force Base and Fort Belvoir), we were told that, at the time of the solicitation, the officials involved determined that the contract requirements were not complex enough to warrant the use of an award fee. In the case of Laughlin, the decision not to incorporate an award-fee provision was made by officials not located at the installation: according to a contracting official at Laughlin, officials at the Air Education and Training Command, who solicited and awarded the Laughlin contract, believed that the contract was straightforward and well defined enough that an award fee was not necessary to get quality service for the base. At Fort Belvoir, the contract was transferred to the installation from the Army Corps of Engineers after it had been solicited and awarded; when the Corps moved from Fort Belvoir, responsibility for contract administration was transferred to the Fort Belvoir Directorate of Contracting. The contracting officials who currently administer the contracts at Laughlin and Fort Belvoir told us that, based on their contract administration experiences, they would prefer to have an award-fee provision because they believe it would improve contractor responsiveness and attention to quality. Contracting officials also stated that using best-value criteria in selecting a contractor can be important because it allows the contracting agency to avoid selecting a contractor whose offer is lower priced but who may not have the capability to execute the contract effectively. Through the use of a best-value selection process, a government agency can select the offer from the private sector that is most advantageous to the government, considering price or cost as well as past performance and other noncost factors; the agency does not have to select the lowest priced, acceptable offer. In a commercial activities (A-76) study, by comparison, the "best value" private sector offer is compared to the government's in-house proposal on the basis of cost only. Best-value criteria are considered most appropriate when work involves higher levels of complexity, significant technical expertise, and risk. In these situations, the government may be able to obtain a better value by comparing the various private sector technical proposals and making trade-offs between cost and various technical and noncost factors, such as past performance.
Officials explained that a contractor who wins based upon a low price that does not provide adequate profit is less likely to focus on quality or responsiveness and more likely to put forth only minimal effort to retain some profit or cut losses. Such situations can be more expensive for the government because of the cost of modifying contracts or finding a new contractor when one defaults. Our previous outsourcing work identified the benefits and drawbacks of using single contracts for multiple base operations support services. Benefits can include (1) a single manager accountable for performance; (2) greater opportunities for efficiencies, such as reduced overhead; and (3) reduced cost and effort to develop and award one contract versus multiple contracts. Conversely, while single contracts may produce large savings, they do not always succeed and can adversely affect a greater number of activities when problems arise. Contracting officials we spoke with during this review told us that coordination is much easier when there is a single contractor. One official stated that base operations support tasks are often interrelated and require good coordination for smooth operations. This official said that the interrelationships between tasks amplify the benefits of a reduced need for coordination. For example, an official at Laughlin Air Force Base stated that failed coordination between two contractors prevented a third contractor from performing its assigned duties when a dispute occurred over who was responsible for mowing the airfield. This affected the third contractor's ability to spray for bugs, which helps reduce the number of birds attracted to the airfield. As a result, this official stated that the base was not able to fly up to 300 additional training sorties because birds, attracted to the airfield by the bugs in the long grass, can be pulled into a jet engine, causing damage. This contracting official said that if one contractor had been solely responsible for all these tasks, the coordination would not have been necessary. In contrast, at Vance Air Force Base these services are provided by one contractor; as a result, all coordination responsibilities lie with that contractor. Fort Irwin had difficulties with a multiple service contract. The size and complexity of Fort Irwin's contract had grown until the administration and overall management of the contract had become cumbersome and in some cases not responsive to the needs of the installation. For example, the contractor directly supports the training mission of the base by maintaining combat equipment used in training. Officials found that the contractor was more focused on maintaining this equipment than on providing other installation support functions, such as public works services and range, airfield, and training support functions. In 1994, Forces Command conducted a study to determine whether the single multiple service contract was in the government's best interest. As a result of this study, Fort Irwin's contract was divided into five separate contracts: two base support contracts to support the logistical and installation support functions and three individual contracts for custodial, food services, and indefinite quantity work. At 3 of the 10 installations we reviewed, small businesses were participating as prime contractors under single contracts for multiple base operations support activities.
In each case, the small business was awarded the prime contract under programs designed to assist small and disadvantaged businesses. Participation by small businesses in multiple service contracts is a sensitive issue because the scope of requirements can reduce the ability of small businesses to compete as prime contractors. DOD officials recognize this and have taken actions to enhance small business participation, but Small Business Administration officials remain concerned about the potential impact of multiple service contracts. The Small Business Administration Reauthorization Act for Fiscal Year 1997 added new provisions to section 15(a) of the Small Business Act, which, at the time of our work, required federal agencies to consider the effect on small businesses when requirements currently being performed by small businesses are considered for consolidation. The Reauthorization Act, among other things, instructs agencies, to the "maximum extent practicable," to "avoid unnecessary and unjustified bundling of contract requirements that precludes small business participation in procurements as prime contractors." The small business contracts we encountered were awarded under one of two programs. At two locations, Laughlin Air Force Base and Naval Air Station Whiting Field, the contract awards were set aside exclusively for competition among small businesses. At Naval Air Station Whidbey Island, the contract with a small disadvantaged business was negotiated on a sole-source basis directly with the Small Business Administration under section 8(a) of the Small Business Act. Whiting Field performed a commercial activities study in 1983 and awarded the first contract competitively. During the second competition in 1985, a small business was awarded the contract, and the work has been performed by a small business since then. The current contract is for approximately $6.6 million annually and provides for a range of base functions such as utilities services, grounds, pest control, mail, and fuel distribution. Whidbey Island contracted with a small business for approximately $15.3 million annually to provide base functions such as maintenance of property, grounds, utilities, housing, supply operations, warehousing, and refuse services. This contract was originally awarded to a small business in fiscal year 1987. During the second procurement in fiscal year 1992, it was determined that there would not be enough small businesses to compete; therefore, the solicitation was unrestricted, and the contract was awarded to a large business. When it was time to resolicit the third procurement in fiscal year 1997, the government was contacted by a small disadvantaged firm, and the contracting officer worked with the Small Business Administration to subcontract with the small disadvantaged business under the 8(a) program. Small Business Administration and DOD officials are concerned that consolidating multiple base operations services into single contracts may limit the participation of small businesses as prime contractors. Contracting officials also stated that it was difficult for small businesses to compete for multiple service contracts due to the high cost of preparing proposals and the low probability of winning against large businesses. Small Business Administration officials stated that their primary concern with omnibus contracts is in cases where requirements that were previously performed by small businesses are consolidated with other contract requirements so that small business participation becomes less likely.
They noted that it is generally not to the advantage of small business to have all or many requirements for base operations included in one contract. On October 28, 1996, the Deputy Secretary of Defense issued a policy statement concerning the consolidation of requirements. In it, the Deputy Secretary announced that, in planning to consolidate several contracts or requirements, the services must consider the effect on small businesses. According to the Deputy Secretary, requirements cannot preclude small businesses as prime contractors unless a market research analysis shows significant benefits in terms of reduced costs, improved services, or both. The policy statement recognizes the balance that must be maintained between the potential cost benefits that can be obtained through consolidated contracts and the loss of small business participation. The Deputy Secretary's statement also recognizes the policy of fostering the participation of small business in federal contracting embodied in statutes such as the Small Business Act and section 2323 of title 10, as implemented by the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. According to several contracting officials, the high cost of preparing a proposal combined with a low probability of winning against large business competition often makes small businesses reluctant to compete for contracts that are not set aside exclusively for small business. For those contracts not awarded to small business, we found only one case where a small contractor competed against a large business. In this case, during the evaluation, the small business was determined to be outside the competitive range because it did not fully respond to the scope and terms of the solicitation. The Small Business Administration Reauthorization Act of 1997, among other things, amends the provisions in section 15(a) of the Small Business Act concerning the consolidation of agency requirements. The act requires federal agencies to consider the impact on small businesses' ability to compete before consolidating requirements that have been performed by small businesses into multiple service contracts. The consolidation must be justified by measurable substantial benefits and be subject to review by the agency's Small Business Administration Procurement Center Representative. Small Business Administration officials told us they are drafting guidelines for federal agencies to follow in implementing this requirement. They expect the guidelines to be completed by September 1998. Although contracting officials reported efficiency gains, cost savings from using single contracts for multiple base operations support functions are not documented. Moreover, at most of the installations, savings cannot be easily quantified because, once a commercial activities study is completed, there is no requirement to track actual savings. Some of the efficiency gains that have been cited include reduced overhead, cross-utilization of contract personnel, and increased flexibility. As previously discussed, at 7 of the 10 installations we reviewed, an initial determination had been made that it was cost-effective to contract out base operations support services. The other three installations contracted out these services from the inception of the base or its mission and thus did not require an A-76 study. Each of the seven installations that performed an A-76 study had determined that the commercial activities could be performed more economically by contracting out.
These commercial activities studies involved comparing estimated contract and in-house costs for the specific work to be performed to determine the most cost-effective approach. However, once the decision was made that it was more cost-effective to contract for the services, the officials were not required to track actual savings. In this regard, contracting officials told us that because the nature of the requirements being contracted has changed enough over time, any baseline for cost comparisons has been lost. Officials stated that single contracts for multiple base operations services provide some obvious efficiency gains that are not available under separate contracts, such as reduced overhead, cross-utilization of contract personnel, and reduced solicitations. For example, contracting officials at Vance, Whidbey Island, and Fallon stated that less work is required to conduct a single competition for a large contract than multiple competitions for smaller contracts. At Warren/Selfridge, Vance, and Bangor, officials told us that the ability to cross-utilize personnel was an advantage. Also, at Warren/Selfridge, Whiting Field, and Whidbey Island, officials told us that the reduced overhead associated with single contracts for multiple base operations services is an advantage. Single contracts for multiple services were one tool being used to meet base operations support needs at the 10 installations we reviewed. Although some installations received extensive support through a single contract, none received all of its required services through a single contract. The history and characteristics of the contracts varied among the 10 installations, and the services obtained through the contracts often reflected differences in mission and geographical location. Precisely comparing which services were included in or excluded from individual contracts at different installations is difficult because there are no generally accepted definitions for base operations support services. All of this suggests that multiple service contracts need to be tailored to the needs, missions, and other factors affecting individual installations. In commenting orally on a draft of this report, DOD stated that it believed our conclusion did not sufficiently recognize that variations in multiple service contracts were necessary and good. Specifically, DOD emphasized that because such contracts are intended to satisfy the individual installation's requirements, a standard contract will not necessarily fit the needs of all installations. We revised the report to reflect that our work suggests that multiple service contracts need to be tailored to the needs, missions, and other factors of importance to the installation. DOD also noted that our report did not recognize all factors that may prevent small businesses from participating in multiple service contracts. Specifically, DOD cited limits on the amount of work that can be subcontracted as a factor that prevents small businesses from competing as prime contractors. This factor was not identified as a significant issue at the 10 installations we reviewed. DOD also noted that our report did not discuss whether small businesses were participating as subcontractors on multiple service contracts. We recognize that small businesses are likely participating as subcontractors, but we did not collect data about subcontracts because it was outside the scope of our work.
The Small Business Administration provided technical clarifications, which we incorporated where appropriate. We conducted our review from August 1997 to February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Administrator, Small Business Administration; and the Director, Office of Management and Budget. We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Barry W. Holman; Tom Howard; C. Douglas Mills, Jr.; John R. Beauchamp; Patricia F. Blowe; and John Brosnan. To examine the use of single contracts for multiple base operations support services, we held discussions with cognizant Office of the Secretary of Defense (OSD), Army, Navy, and Air Force officials. While no central listing of these contracts existed, these officials were able to identify the following 15 installations as locations that contracted out multiple base operations support services under a single contract:
Naval Submarine Base Bangor, Washington
Naval Submarine Base Kings Bay, Georgia
Naval Air Station Fallon, Nevada
Naval Air Station Whidbey Island, Washington
Naval Air Station Whiting Field, Florida
Naval Air Facility El Centro, California
Naval Station Roosevelt Roads, Puerto Rico
Naval Security Station, Washington, D.C.
U.S. government activities in the Republic of Singapore
We did not independently validate the information provided by the services but accepted it as a sample of locations utilizing single contracts to perform multiple base operations support services. From this list, we selected 10 to visit and examine their current multiple service contracts. We reviewed legislation, various reports, and studies and held discussions with officials from the Office of the Secretary of Defense; U.S. Army Forces Command headquarters; the Naval Facilities Engineering Command; and the office of the Secretary of the Army for Research, Development, and Acquisition. Likewise, we held discussions with the Office of Small and Disadvantaged Business Utilization, Office of the Assistant Deputy Under Secretary of Defense; the Small Business Administration; and the U.S. Chamber of Commerce concerning implications for small business. To accomplish our objectives, we met with installation and contract officials at 10 installations to review and discuss the management of their base operations support contracts. At seven of these locations, commercial activities studies (A-76) had been performed in the past to determine which functions should be contracted and which kept in-house. We did not attempt to validate these studies; however, we discussed their results with contracting officials, including which functions were contracted as a result of these studies. Locations we reviewed were Arnold Engineering Development Center, Arnold Air Force Base, Tennessee; Laughlin Air Force Base, Texas; Vance Air Force Base, Oklahoma; Tank-Automotive and Armaments Command, Warren and Selfridge, Michigan; Fort Belvoir, Virginia; Fort Irwin National Training Center, California; Naval Submarine Base Bangor and Naval Air Station Whidbey Island, Washington; Naval Air Station Fallon, Nevada; and Naval Air Station Whiting Field, Florida.
To determine the characteristics of multiple service contracts and the kinds of services being procured, we reviewed the current multiple service contract to identify the contract type, length and dates of performance, cost, number of offerors, and contractor, as well as whether the contractor was a small business. We examined the services contained in the performance work statement to determine which were commonly included or omitted. We discussed these characteristics and the history of the multiple service contract at each installation with contract officials. We also discussed with officials whether single contracts for specific services, regional contracts for specific services, and in-house personnel were used to meet the installations' base operations support service requirements. To determine what lessons were learned from past and current multiple service contracts and whether cost and efficiency gains were documented, we interviewed contracting officials at each of the 10 installations to find out whether any record or history was maintained. Since we found no formal record of lessons learned or cost and efficiency gains, we obtained agency officials' opinions on both current and past multiple service contracts and on contracting in general at the installation. Officials also commented on efficiency gains they believed had resulted from the use of multiple service contracts. To ascertain the implications for small businesses when multiple service contracts are utilized, we determined the extent to which small businesses were participating as prime contractors through discussions with contracting officials. Further, we interviewed contracting officials to determine the extent to which small businesses competed for these contracts and, in the cases where a small business won the contract, the method by which the contract was awarded. Additionally, we spoke to officials at the Small Business Administration and the Office of Small and Disadvantaged Business Utilization, Office of the Assistant Deputy Under Secretary of Defense, to learn of their concerns regarding the use of single contracts for multiple services and the implications for small business. We conducted our review from August 1997 to February 1998 in accordance with generally accepted government auditing standards.
Appendix II: Selected base support functions identified in the multiple service contracts at installations reviewed.
Vance Air Force Base is located in Enid, Oklahoma, 90 miles north of Oklahoma City. Vance is a joint undergraduate pilot training base and home to the 71st Flying Training Wing. Vance meets the majority of its base operations support needs with a single contract for multiple support services and a small number of contracts for single services; some additional services are performed by in-house personnel. Vance currently has a single contract for multiple base operations support services with Northrop-Grumman Technical Services, Inc., that runs from fiscal year 1996 to fiscal year 2000. The contract is a fixed-price-incentive contract with an award fee for 1 base year plus 4 option years and an estimated cost of $40.2 million annually. It provides for services such as communications, supply, civil engineering, transportation, airfield management, and aircraft maintenance. Additionally, some morale, welfare, and recreation functions are included in this contract.
According to contracting officials at Vance, the base originally awarded a fixed-price multiple service contract in 1960, when the training mission was established at the base. According to contracting officials, the decision to contract for base operations support services was part of an experiment to determine how well and efficiently a contractor-run base could operate compared to another base that used in-house personnel to perform the services. In 1972, the contract was solicited for a 5-year period, and Northrop-Grumman Technical Services won the contract. Four subsequent competitions for the contract have also gone to Northrop-Grumman Technical Services. The last solicitation had two offerors. Vance also uses other contracts and in-house personnel to meet its base operations support requirements. Specifically, Vance utilizes a single-service contract for simulation instruction, simulator maintenance, and technical support for training aircraft. Other services such as pilot instructors, air traffic control, weather monitoring, and quality assurance are performed by in-house personnel. The Arnold Engineering Development Center at Arnold Air Force Base is an Air Force aerospace ground environmental test center providing testing services for the armed services, the National Aeronautics and Space Administration, both domestic and foreign commercial aerospace firms, and foreign governments. Located in south-central Tennessee, the center has been operated by contractors under the management of an Air Force commander and staff since 1951. Arnold meets its needs for base operations support by means of one contract for multiple support services and an additional service provided by in-house personnel. Additionally, testing services, once part of the multiple support services contract, are now provided separately in a multiple testing service contract. The current contract is a cost-plus-award-fee contract for 5 years plus a single 3-year option and is worth about $100 million annually. The support contract, covering fiscal years 1996 to 2003, was won by Aerospace Center Support, a joint venture of Computer Sciences Corporation, DynCorp, and General Physics. Functions included in the support contract are central computer operations, base support and maintenance, environmental, utilities, logistics, transportation, base security, and fire protection. During 1949 and 1950, while Arnold was under construction, a study for the Secretary of the Air Force was conducted by the Scientific Advisory Board. The study, prepared in 1950, recommended that the Arnold Engineering Development Center be operated by a non-profit entity, preferably one sponsored by a large industrial corporation with a variety of technical interests. After reviewing this and other reports and information available at the time, the Secretary of the Air Force decided that the Air Force would be best served by contracting with a for-profit corporation to take advantage of the profit motive. To avoid conflicts of interest, a contract provision was developed to preclude operation by firms involved in the manufacture of hardware amenable to testing at Arnold. The first contract was awarded in 1950, and the test center was established in 1951. The contract provided for all testing and support services at the installation. In fiscal year 1981, the contract was separated into two testing contracts and a single mission support contract. The mission support contract provides most of the base operations support services at the installation.
In addition, Air Force Services functions (formerly morale, welfare, and recreation) are run by in-house Air Force personnel. Laughlin Air Force Base is a pilot training installation located in Del Rio, Texas, 150 miles west of San Antonio, Texas. Laughlin meets its requirements for base operations support through a single contract for multiple support services, several contracts for specific services, and some support services performed by in-house personnel. Laughlin is now in the second year of a 5-year fixed-price multiple service contract, which was awarded to a small business in fiscal year 1997 for approximately $5.4 million in the base year and contains no award-fee provision. According to contracting officials, the multiple service contract resulted from a commercial activities study and provides supply, civil engineering operations, and transportation functions. According to a contracting official at Laughlin, the commercial activities study was the result of a DOD Management Review Directive and was conducted from April 9, 1992, until July 12, 1996. The civil engineering functions include facilities management, pest management, plumbing, and utilities, while supply functions include such services as inventory control, computer support, and customer service. Beginning in fiscal year 1999, vehicle maintenance and fuels management services will become part of the multiple service contract. According to a contracting official, this multiple service base operations support contract was awarded to a small business as a result of a small business set-aside competition among three small business firms. Laughlin has other single service contracts for such functions as grounds maintenance, custodial services, and transient alert. In-house civilians also perform some services such as aircraft maintenance, while military personnel provide the installation security functions. Fort Belvoir, located 18 miles southwest of Washington, D.C., provides support services to the Military District of Washington, the National Capital Region, and Fort Belvoir tenants. Command of the installation was transferred to the Military District of Washington in October 1988. The current mission is to provide support for the growing number of tenants. According to contract officials, Fort Belvoir meets its base operations support needs through a single contract for multiple support services, contracts for specific support services, and the use of in-house personnel. Contracting officials stated that Fort Belvoir's current multiple service contract is a 5-year fixed-price contract with DynCorp for approximately $12 million annually. They stated that this is the third contract awarded since the first one in January 1986, and there were three offerors for the current contract. According to a Fort Belvoir official, the first contract resulted from a commercial activities study performed in the early 1980s, and the study was the result of a mandate by the Army's Training and Doctrine Command, then the command with authority over Fort Belvoir. Fort Belvoir's contract includes such functions as family housing operations and maintenance, grounds, pest control, hospital operations and maintenance, and refuse collection. Contracting officials at Fort Belvoir stated that Fort Belvoir uses other contracts for specific support services such as major road repairs, asbestos removal, and custodial services.
These officials also stated that military personnel provide such services as installation security and medical functions at the hospital, while in-house civilians provide such services as morale, welfare, and recreation; logistics management; and information management. Fort Irwin is located in the desert of California, approximately 150 miles east of Los Angeles. In 1981, it was activated as the Army's National Training Center, with the mission of providing realistic joint and combined arms training focused on developing soldiers, leaders, and Army units on the battlefield. Fort Irwin meets its base operations support needs through the use of two multiple service contracts, several contracts for specific support services, one Army-wide contract, and in-house personnel. When the decision was made to activate Fort Irwin as the National Training Center in 1981, a commercial activities study was conducted to determine whether a contract or in-house operation was more cost-effective. The study results demonstrated that a contract operation was more cost-effective, and a cost-plus-fixed-fee contract was awarded in fiscal year 1982. This contract was recompeted three times. A cost-type contract was used because requirements could not be precisely estimated. In Fort Irwin's case, the base had been reactivated and there were no existing personnel operations on-site. During an extensive Forces Command review in the early 1990s, it was determined that the size and complexity of the contract had become cumbersome and in some cases not responsive to the installation's needs. This led to a May 1994 study to determine the most efficient and effective configuration to support the mission. As a result, for the 1996 resolicitation, Fort Irwin divided the multiple service contract into five separate contracts: two multiple service contracts and three single-function contracts. The major portion of the existing multiple service contract was split into two cost-plus-award-fee contracts, one for installation support services and the other for logistics support. They were valued at approximately $14.2 million and $35.3 million, respectively. The logistics support contract provides services such as tactical and nontactical vehicle maintenance; supply, including ammunition; central receiving; and storage, issue, and turn-in. The installation support contract provides a wide range of services such as public works; range, airfield, and training support; community activities (morale, welfare, and recreation); and provost marshal. The other three were fixed-price contracts for custodial services, food services, and indefinite quantity work, respectively. Although contracts are used to meet most base support service needs, in-house personnel perform some support functions. Examples of the services provided in-house include cooking, child development services, technical services, supply services, and training support. Additionally, during the breakup of Fort Irwin's contract, the Army Medical Command decided to take over contracting of the medical support functions (hospital housekeeping and biomedical maintenance). These functions were contracted out Army-wide by the Army Medical Command. Both the Warren and Selfridge support activities are under the command of the U.S. Army Tank-Automotive and Armaments Command, which is headquartered at Warren. These activities are located 20 miles apart and 5 miles from Detroit, Michigan.
The Tank-Automotive and Armaments Command's mission is to field and support mobility and armament systems. Selfridge is one of the command's support activities and also directs programs that provide support services at Selfridge for personnel and dependents in such areas as housing, morale, safety, environmental, and recreational services. The activities' base operations support needs are met through the use of one contract for multiple services, several contracts for specific support services, and in-house personnel. As a result of separate commercial activities studies conducted approximately 18 years ago, two contracts for multiple support services were awarded. One contract supported Warren, the other Selfridge. However, in 1989, a decision was made that it would be in the best interest of the government to combine these two contracts into a single cost-type contract as a means to reduce overhead and contract administration costs. The current contract was awarded to Serv-Air, Inc., for about $15 million annually for 5 years, fiscal years 1994 through 1998. This contract includes such services as supply, warehousing, audiovisual, facility engineering, family housing, and administrative services to support the operations of both activities. Contracting officials told us that, due to existing contracts at the Warren activity, the custodial and refuse collection services are performed there under separate single function contracts. In-house personnel handle functions such as community family services, engineering and technical services, resource management, information technology, provost marshal, and public affairs services. Naval Submarine Base Bangor is a fully operational shore activity selected as the West Coast Trident submarine base. It is home to 9 nuclear submarines and 54 tenant commands. Bangor is located on the western side of the Puget Sound, outside of Seattle, Washington. Its mission is to provide support to the Trident submarine-launched ballistic missile system; maintain and operate facilities for administration and personnel support for operations of the submarine force; provide logistic support to other activities in the area; and perform other functions as may be directed by competent authority. Bangor meets its base support needs primarily through a single contract for multiple support services. In addition, Bangor has several contracts for specific services and utilizes in-house personnel for others. According to a Bangor official, Bangor has contracted for base operations services since it was activated as a submarine base in 1976. Officials stated that it was determined that a contract operation would be more cost-effective, based on the results of a commercial firm's study of all the base tenants and operations. The original contract was a 1-year cost-plus-incentive-fee contract. The current contract is a fixed-price-award-fee contract awarded to Johnson Controls World Services, Inc., for a base price of about $40 million annually. It also includes a provision for the contractor to meet ISO 9000 standards to better ensure that the contractor can meet customer requirements and to help reduce contract monitoring costs. The term of this contract is 10 years, from October 1997 through September 2007. It provides a wide range of base support services, including administrative support, various public works services, utility and supply services, and security services.
Contract officials stated that the current contract was resolicited for a 10-year period in an effort to attract competition and save money over the life of the contract. Despite this change, there was only one offeror for the contract. Officials stated that, due to the current contractor's success in collecting a large portion of the maximum possible award fee, other firms did not think that their chances of winning the contract outweighed the cost of preparing a proposal. Except for the initial contract, the other four solicitations were for 5-year periods each. Bangor has several individual contracts to meet the needs of base operations support functions, such as architect and engineering services, electronic and communications equipment, animal control, recreational library services, and maintenance of automated data processing equipment. Also, Bangor provides such services as morale, welfare, and recreation; family services; food preparation and administration; and crane inspection and certification through the use of in-house personnel. Naval Air Station Whidbey Island is located on Whidbey Island in the Puget Sound, Washington. The base mission is to provide the highest quality facilities, services, and products to the naval aviation community and all organizations utilizing the air station. According to contract officials, Whidbey Island meets its needs for base operations support through a single contract for multiple support services, several single contracts for specific services, and the use of in-house personnel. The current fixed-price-award-fee contract was awarded for fiscal years 1997 through 2001 for approximately $15.3 million annually. The contract was negotiated on a sole-source basis with the Small Business Administration pursuant to the 8(a) program, with services provided by Chugach Development Corporation. Functions include such services as family housing maintenance, refuse collection, supply operations, grounds and pest control, and utilities services. The current multiple service contract is the third 5-year contract awarded by Whidbey Island. The first contract was awarded in fiscal year 1987 as the result of a commercial activities study. Although the multiple service contract provides for a large portion of Whidbey Island's base operations needs, contracts for specific functions and in-house personnel are also used. Such services as morale, welfare, and recreation; environmental services; aircraft operations; public works engineering; and housing are provided through the use of in-house personnel. Other services, such as janitorial and grounds services and indefinite order work for paving, painting, and roofing, are provided under single contracts for specific services. Naval Air Station Whiting Field is located approximately 33 miles northeast of Pensacola, Florida, near the city of Milton. The activity includes two major landing fields and is the home station of Training Air Wing Five, which consists of three fixed-wing pilot training squadrons and two helicopter pilot training squadrons. In addition, the activity maintains 13 outlying fields in support of the pilot training mission. Whiting Field meets its needs for base operations support through a single contract for multiple support services, several contracts for specific services, and the use of in-house personnel. The current multiple service contract with Tumpane Services Corporation was awarded for approximately $6.6 million in the base year to a small business under the set-aside program.
This fixed-price-award-fee contract covering fiscal years 1997 through 2001 was awarded pursuant to a best-value selection process. Functions in the contract include waste water treatment, pest control, grounds maintenance, hazardous materials management, communications systems, transportation, and utilities services. The first contract was awarded in fiscal year 1983 for a 3-year period as the result of a commercial activities study. Since fiscal year 1985, the contract has been recompeted three times, and each time it was awarded to a small business. In addition to the multiple service contract, Whiting Field uses single contracts and in-house personnel to provide base operations support. Services such as morale, welfare, and recreation; fire protection; supply services; ground electronics; and child development are provided through the use of in-house personnel. Other functions such as custodial, military family housing maintenance and repair, aircraft maintenance, and simulation are provided under single service contracts. Naval Air Station Fallon, located 60 miles east of Reno, Nevada, is an air-to-air training facility for naval pilots. According to contract officials, Fallon uses a single contract for multiple support services to provide for a large portion of its base operations support needs, in addition to in-house personnel and some contracts for specific services. The current multiple service contract was awarded to Day-Zimmerman on a fixed-price-award-fee basis and is worth about $15 million annually. The contract covers fiscal years 1998 through 2002. The current contract is the third 5-year contract awarded, with each having a single base year and four individual option years. According to a contracting official at Fallon, the decision to contract for base operations support services at Fallon was the result of a commercial activities study conducted from May 1981 until January 1984. According to this official, the impetus for the study was the desire on the part of the administration at the time to privatize commercial activities at military installations. Contracting officials told us that the first contract was awarded in November 1987 for fiscal year 1988. Some of the base operations support services provided in the multiple service contract include operating combined bachelor quarters, public works, custodial services, airfield management, pest management, transportation, food services, supply, and housing operations. According to contracting officials, in-house personnel provide such functions as locksmith services and most of the morale, welfare, and recreation services. These officials stated that contracts for specific services are used to provide such functions as grounds maintenance, fuels handling, aircraft maintenance, and minor construction.
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) use of single contracts for multiple base operations support functions, focusing on: (1) the history and characteristics of selected single contracts for multiple base operations support services; (2) the kinds of services procured under these contracts; (3) lessons DOD has learned from the use of these contracts; (4) whether small businesses participate in these contracts; and (5) whether cost and efficiency gains have been documented. GAO noted that: (1) the history and characteristics of selected single contracts for multiple base operations support services varied at the 10 installations GAO reviewed; (2) the decisions to use a single contract for multiple services occurred in two ways: (a) at seven installations, the decision occurred at the time of a commercial activities (OMB Circular A-76) study; and (b) in the other three cases, the decision was made at the time the installation or its current mission was established; (3) most of the contracts were awarded for 5 years and ranged from about $5.4 million to $100 million annually; (4) although some installations received extensive base operations support services through a single contract, none received all of its required services through a single contract; (5) at all 10 installations, base operations support requirements were met through some combination of single contracts for multiple services, contracts for specific services, and in-house personnel; (6) the kinds of services procured under the multiple service contracts also varied and were influenced by a number of factors; (7) precisely comparing which services were included in or excluded from individual contracts at different installations is difficult because there are no generally accepted definitions for base operations support services; (8) as a result, contracting officials often used the same or similar terms differently; (9) DOD officials at the 10 installations GAO reviewed have learned a number of lessons from their experiences with single contracts for multiple base operations support services; (10) although many contracting officials GAO spoke with stated that coordination is much easier when there is a single contractor, they acknowledged that problems can still arise; (11) at 3 of the 10 installations GAO reviewed, small businesses were participating in single contracts for multiple base operations support services; (12) in all three cases, the small business was the prime contractor and the contracts were awarded under various small business programs; (13) the Small Business Administration and DOD officials are aware that consolidating multiple base operations services into single contracts may reduce the participation of small businesses as prime contractors; (14) officials from both agencies have issued or are developing guidance for considering small businesses in contract consolidation decisions; and (15) although contracting officials reported efficiency gains, cost savings from using single contracts for multiple base operations support services are not documented.